Test Report: Docker_Linux_containerd 21681

                    
595bbf5b740d7896a57580209f3c1775d52404c7:2025-10-08:41822

Tests failed (2/332)

Order  Failed test                           Duration
233    TestMultiNode/serial/AddNode          12.21s
234    TestMultiNode/serial/MultiNodeLabels  1.88s
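For reference, the first failing step can be re-run with the exact command the test invoked (same profile name; assumes a comparable Docker + containerd host). In this run it exited with status 80 after roughly 10 seconds:

	out/minikube-linux-amd64 node add -p multinode-439307 -v=5 --alsologtostderr

The full trace of the AddNode failure follows.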
TestMultiNode/serial/AddNode (12.21s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-439307 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-439307 -v=5 --alsologtostderr: exit status 80 (10.279892572s)

-- stdout --
	* Adding node m03 to cluster multinode-439307 as [worker]
	* Starting "multinode-439307-m03" worker node in "multinode-439307" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	
	

-- /stdout --
** stderr ** 
	I1008 14:24:31.503412  661380 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:24:31.503718  661380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:24:31.503726  661380 out.go:374] Setting ErrFile to fd 2...
	I1008 14:24:31.503731  661380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:24:31.503960  661380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	I1008 14:24:31.504316  661380 mustload.go:65] Loading cluster: multinode-439307
	I1008 14:24:31.504705  661380 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:24:31.505135  661380 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:24:31.522203  661380 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:24:31.522474  661380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:24:31.580691  661380 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-08 14:24:31.570749338 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:24:31.580805  661380 api_server.go:166] Checking apiserver status ...
	I1008 14:24:31.580849  661380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:24:31.580888  661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:24:31.598058  661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:24:31.705848  661380 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup
	W1008 14:24:31.714292  661380 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:24:31.714345  661380 ssh_runner.go:195] Run: ls
	I1008 14:24:31.718176  661380 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1008 14:24:31.723066  661380 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1008 14:24:31.725056  661380 out.go:179] * Adding node m03 to cluster multinode-439307 as [worker]
	I1008 14:24:31.726619  661380 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:24:31.726784  661380 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:24:31.728528  661380 out.go:179] * Starting "multinode-439307-m03" worker node in "multinode-439307" cluster
	I1008 14:24:31.729540  661380 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1008 14:24:31.730718  661380 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:24:31.732198  661380 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:24:31.732231  661380 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1008 14:24:31.732241  661380 cache.go:58] Caching tarball of preloaded images
	I1008 14:24:31.732289  661380 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:24:31.732319  661380 preload.go:233] Found /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1008 14:24:31.732327  661380 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1008 14:24:31.732427  661380 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:24:31.753268  661380 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:24:31.753290  661380 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:24:31.753310  661380 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:24:31.753345  661380 start.go:360] acquireMachinesLock for multinode-439307-m03: {Name:mkc57b0699e109bd3e6a21447d35a5a5dbc2c025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:24:31.753459  661380 start.go:364] duration metric: took 89.211µs to acquireMachinesLock for "multinode-439307-m03"
	I1008 14:24:31.753489  661380 start.go:93] Provisioning new machine with config: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1008 14:24:31.753626  661380 start.go:125] createHost starting for "m03" (driver="docker")
	I1008 14:24:31.755481  661380 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 14:24:31.755601  661380 start.go:159] libmachine.API.Create for "multinode-439307" (driver="docker")
	I1008 14:24:31.755633  661380 client.go:168] LocalClient.Create starting
	I1008 14:24:31.755724  661380 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem
	I1008 14:24:31.755768  661380 main.go:141] libmachine: Decoding PEM data...
	I1008 14:24:31.755790  661380 main.go:141] libmachine: Parsing certificate...
	I1008 14:24:31.755858  661380 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem
	I1008 14:24:31.755887  661380 main.go:141] libmachine: Decoding PEM data...
	I1008 14:24:31.755904  661380 main.go:141] libmachine: Parsing certificate...
	I1008 14:24:31.756194  661380 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:24:31.773512  661380 network_create.go:77] Found existing network {name:multinode-439307 subnet:0xc00150bad0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I1008 14:24:31.773574  661380 kic.go:121] calculated static IP "192.168.67.4" for the "multinode-439307-m03" container
	I1008 14:24:31.773658  661380 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 14:24:31.791196  661380 cli_runner.go:164] Run: docker volume create multinode-439307-m03 --label name.minikube.sigs.k8s.io=multinode-439307-m03 --label created_by.minikube.sigs.k8s.io=true
	I1008 14:24:31.808906  661380 oci.go:103] Successfully created a docker volume multinode-439307-m03
	I1008 14:24:31.809067  661380 cli_runner.go:164] Run: docker run --rm --name multinode-439307-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307-m03 --entrypoint /usr/bin/test -v multinode-439307-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 14:24:32.191922  661380 oci.go:107] Successfully prepared a docker volume multinode-439307-m03
	I1008 14:24:32.191990  661380 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:24:32.192021  661380 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 14:24:32.192114  661380 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 14:24:36.576077  661380 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.383913167s)
	I1008 14:24:36.576107  661380 kic.go:203] duration metric: took 4.384083442s to extract preloaded images to volume ...
	W1008 14:24:36.576193  661380 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 14:24:36.576242  661380 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 14:24:36.576290  661380 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 14:24:36.632796  661380 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-439307-m03 --name multinode-439307-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-439307-m03 --network multinode-439307 --ip 192.168.67.4 --volume multinode-439307-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 14:24:36.911015  661380 cli_runner.go:164] Run: docker container inspect multinode-439307-m03 --format={{.State.Running}}
	I1008 14:24:36.929548  661380 cli_runner.go:164] Run: docker container inspect multinode-439307-m03 --format={{.State.Status}}
	I1008 14:24:36.947649  661380 cli_runner.go:164] Run: docker exec multinode-439307-m03 stat /var/lib/dpkg/alternatives/iptables
	I1008 14:24:36.992847  661380 oci.go:144] the created container "multinode-439307-m03" has a running status.
	I1008 14:24:36.992885  661380 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa...
	I1008 14:24:37.503926  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 14:24:37.503988  661380 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 14:24:37.529265  661380 cli_runner.go:164] Run: docker container inspect multinode-439307-m03 --format={{.State.Status}}
	I1008 14:24:37.547284  661380 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 14:24:37.547312  661380 kic_runner.go:114] Args: [docker exec --privileged multinode-439307-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 14:24:37.593870  661380 cli_runner.go:164] Run: docker container inspect multinode-439307-m03 --format={{.State.Status}}
	I1008 14:24:37.612364  661380 machine.go:93] provisionDockerMachine start ...
	I1008 14:24:37.612466  661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
	I1008 14:24:37.631009  661380 main.go:141] libmachine: Using SSH client type: native
	I1008 14:24:37.631268  661380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33316 <nil> <nil>}
	I1008 14:24:37.631281  661380 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:24:37.778664  661380 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307-m03
	
	I1008 14:24:37.778691  661380 ubuntu.go:182] provisioning hostname "multinode-439307-m03"
	I1008 14:24:37.778762  661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
	I1008 14:24:37.797218  661380 main.go:141] libmachine: Using SSH client type: native
	I1008 14:24:37.797493  661380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33316 <nil> <nil>}
	I1008 14:24:37.797515  661380 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-439307-m03 && echo "multinode-439307-m03" | sudo tee /etc/hostname
	I1008 14:24:37.954008  661380 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307-m03
	
	I1008 14:24:37.954092  661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
	I1008 14:24:37.971585  661380 main.go:141] libmachine: Using SSH client type: native
	I1008 14:24:37.971806  661380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33316 <nil> <nil>}
	I1008 14:24:37.971830  661380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-439307-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-439307-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-439307-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:24:38.117676  661380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:24:38.117707  661380 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-513010/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-513010/.minikube}
	I1008 14:24:38.117745  661380 ubuntu.go:190] setting up certificates
	I1008 14:24:38.117759  661380 provision.go:84] configureAuth start
	I1008 14:24:38.117820  661380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m03
	I1008 14:24:38.135537  661380 provision.go:143] copyHostCerts
	I1008 14:24:38.135581  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
	I1008 14:24:38.135617  661380 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem, removing ...
	I1008 14:24:38.135641  661380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
	I1008 14:24:38.135720  661380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem (1078 bytes)
	I1008 14:24:38.135837  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
	I1008 14:24:38.135864  661380 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem, removing ...
	I1008 14:24:38.135872  661380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
	I1008 14:24:38.135917  661380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem (1123 bytes)
	I1008 14:24:38.136032  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
	I1008 14:24:38.136058  661380 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem, removing ...
	I1008 14:24:38.136068  661380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
	I1008 14:24:38.136115  661380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem (1675 bytes)
	I1008 14:24:38.136204  661380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem org=jenkins.multinode-439307-m03 san=[127.0.0.1 192.168.67.4 localhost minikube multinode-439307-m03]
	I1008 14:24:38.432676  661380 provision.go:177] copyRemoteCerts
	I1008 14:24:38.432761  661380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:24:38.432834  661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
	I1008 14:24:38.450974  661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa Username:docker}
	I1008 14:24:38.554844  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 14:24:38.554934  661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 14:24:38.576103  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 14:24:38.576182  661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:24:38.594092  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 14:24:38.594205  661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1008 14:24:38.612932  661380 provision.go:87] duration metric: took 495.153996ms to configureAuth
	I1008 14:24:38.612966  661380 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:24:38.613233  661380 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:24:38.613250  661380 machine.go:96] duration metric: took 1.000862975s to provisionDockerMachine
	I1008 14:24:38.613258  661380 client.go:171] duration metric: took 6.857615152s to LocalClient.Create
	I1008 14:24:38.613280  661380 start.go:167] duration metric: took 6.857680336s to libmachine.API.Create "multinode-439307"
	I1008 14:24:38.613294  661380 start.go:293] postStartSetup for "multinode-439307-m03" (driver="docker")
	I1008 14:24:38.613304  661380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:24:38.613354  661380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:24:38.613392  661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
	I1008 14:24:38.631341  661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa Username:docker}
	I1008 14:24:38.740533  661380 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:24:38.744305  661380 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:24:38.744331  661380 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:24:38.744343  661380 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/addons for local assets ...
	I1008 14:24:38.744392  661380 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/files for local assets ...
	I1008 14:24:38.744482  661380 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> 5167872.pem in /etc/ssl/certs
	I1008 14:24:38.744495  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /etc/ssl/certs/5167872.pem
	I1008 14:24:38.744596  661380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 14:24:38.752519  661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /etc/ssl/certs/5167872.pem (1708 bytes)
	I1008 14:24:38.773530  661380 start.go:296] duration metric: took 160.218ms for postStartSetup
	I1008 14:24:38.774021  661380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m03
	I1008 14:24:38.790724  661380 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:24:38.791012  661380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:24:38.791058  661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
	I1008 14:24:38.809074  661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa Username:docker}
	I1008 14:24:38.910315  661380 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:24:38.915033  661380 start.go:128] duration metric: took 7.161389622s to createHost
	I1008 14:24:38.915060  661380 start.go:83] releasing machines lock for "multinode-439307-m03", held for 7.161587943s
	I1008 14:24:38.915141  661380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m03
	I1008 14:24:38.933280  661380 ssh_runner.go:195] Run: systemctl --version
	I1008 14:24:38.933326  661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
	I1008 14:24:38.933355  661380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:24:38.933418  661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
	I1008 14:24:38.952475  661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa Username:docker}
	I1008 14:24:38.952825  661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa Username:docker}
	I1008 14:24:39.055164  661380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 14:24:39.104663  661380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:24:39.104730  661380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:24:39.131676  661380 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 14:24:39.131706  661380 start.go:495] detecting cgroup driver to use...
	I1008 14:24:39.131742  661380 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:24:39.131808  661380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1008 14:24:39.147629  661380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 14:24:39.161143  661380 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:24:39.161203  661380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:24:39.178663  661380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:24:39.196725  661380 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:24:39.278030  661380 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:24:39.365597  661380 docker.go:234] disabling docker service ...
	I1008 14:24:39.365664  661380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:24:39.385105  661380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:24:39.397955  661380 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:24:39.481537  661380 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:24:39.562465  661380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:24:39.575732  661380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:24:39.591093  661380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1008 14:24:39.602384  661380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 14:24:39.612292  661380 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1008 14:24:39.612374  661380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1008 14:24:39.622133  661380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 14:24:39.631552  661380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 14:24:39.640967  661380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 14:24:39.650649  661380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:24:39.660016  661380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 14:24:39.669597  661380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1008 14:24:39.679285  661380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1008 14:24:39.688793  661380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:24:39.696882  661380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:24:39.704845  661380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:24:39.783731  661380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1008 14:24:39.892869  661380 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1008 14:24:39.892945  661380 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1008 14:24:39.897222  661380 start.go:563] Will wait 60s for crictl version
	I1008 14:24:39.897273  661380 ssh_runner.go:195] Run: which crictl
	I1008 14:24:39.900992  661380 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:24:39.925343  661380 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1008 14:24:39.925417  661380 ssh_runner.go:195] Run: containerd --version
	I1008 14:24:39.951454  661380 ssh_runner.go:195] Run: containerd --version
	I1008 14:24:39.978998  661380 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1008 14:24:39.980242  661380 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:24:39.998071  661380 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1008 14:24:40.002653  661380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:24:40.013276  661380 mustload.go:65] Loading cluster: multinode-439307
	I1008 14:24:40.013523  661380 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:24:40.013742  661380 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:24:40.030860  661380 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:24:40.031165  661380 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307 for IP: 192.168.67.4
	I1008 14:24:40.031179  661380 certs.go:195] generating shared ca certs ...
	I1008 14:24:40.031197  661380 certs.go:227] acquiring lock for ca certs: {Name:mk57aa9b2383fcc0908491da1ce926c707ff69a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:24:40.031364  661380 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key
	I1008 14:24:40.031427  661380 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key
	I1008 14:24:40.031445  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 14:24:40.031467  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 14:24:40.031485  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 14:24:40.031502  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 14:24:40.031574  661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem (1338 bytes)
	W1008 14:24:40.031607  661380 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787_empty.pem, impossibly tiny 0 bytes
	I1008 14:24:40.031617  661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:24:40.031646  661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:24:40.031671  661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:24:40.031694  661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem (1675 bytes)
	I1008 14:24:40.031736  661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem (1708 bytes)
	I1008 14:24:40.031774  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:40.031787  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem -> /usr/share/ca-certificates/516787.pem
	I1008 14:24:40.031799  661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /usr/share/ca-certificates/5167872.pem
	I1008 14:24:40.031819  661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:24:40.051762  661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 14:24:40.070015  661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:24:40.088946  661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 14:24:40.106920  661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:24:40.127565  661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem --> /usr/share/ca-certificates/516787.pem (1338 bytes)
	I1008 14:24:40.145655  661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /usr/share/ca-certificates/5167872.pem (1708 bytes)
	I1008 14:24:40.163310  661380 ssh_runner.go:195] Run: openssl version
	I1008 14:24:40.170037  661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:24:40.178821  661380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:40.183226  661380 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:03 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:40.183309  661380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:40.219078  661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:24:40.228740  661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516787.pem && ln -fs /usr/share/ca-certificates/516787.pem /etc/ssl/certs/516787.pem"
	I1008 14:24:40.237915  661380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516787.pem
	I1008 14:24:40.242516  661380 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:09 /usr/share/ca-certificates/516787.pem
	I1008 14:24:40.242603  661380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516787.pem
	I1008 14:24:40.278280  661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516787.pem /etc/ssl/certs/51391683.0"
	I1008 14:24:40.288345  661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5167872.pem && ln -fs /usr/share/ca-certificates/5167872.pem /etc/ssl/certs/5167872.pem"
	I1008 14:24:40.297682  661380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5167872.pem
	I1008 14:24:40.301713  661380 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:09 /usr/share/ca-certificates/5167872.pem
	I1008 14:24:40.301777  661380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5167872.pem
	I1008 14:24:40.339504  661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5167872.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:24:40.349876  661380 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:24:40.354004  661380 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 14:24:40.354048  661380 kubeadm.go:934] updating node {m03 192.168.67.4 0 v1.34.1  false true} ...
	I1008 14:24:40.354193  661380 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-439307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:24:40.354254  661380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:24:40.362626  661380 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:24:40.362688  661380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1008 14:24:40.370788  661380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1008 14:24:40.383722  661380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:24:40.399089  661380 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:24:40.402842  661380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:24:40.413206  661380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:24:40.491846  661380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:24:40.516623  661380 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:24:40.516905  661380 start.go:317] joinCluster: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:24:40.517090  661380 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1008 14:24:40.517140  661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:24:40.535598  661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:24:40.687024  661380 start.go:343] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1008 14:24:40.687109  661380 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8n8r8s.dukoa1mhefvohilp --discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-439307-m03"
	I1008 14:24:41.456440  661380 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1008 14:24:41.657201  661380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-439307-m03 minikube.k8s.io/updated_at=2025_10_08T14_24_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=multinode-439307 minikube.k8s.io/primary=false
	I1008 14:24:41.724463  661380 start.go:319] duration metric: took 1.207555044s to joinCluster
	I1008 14:24:41.726377  661380 out.go:203] 
	W1008 14:24:41.727647  661380 out.go:285] X Exiting due to GUEST_NODE_ADD: failed to add node: join node to cluster: error applying worker node "m03" label: apply node labels: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-439307-m03 minikube.k8s.io/updated_at=2025_10_08T14_24_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=multinode-439307 minikube.k8s.io/primary=false: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "multinode-439307-m03" not found
	
	X Exiting due to GUEST_NODE_ADD: failed to add node: join node to cluster: error applying worker node "m03" label: apply node labels: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-439307-m03 minikube.k8s.io/updated_at=2025_10_08T14_24_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=multinode-439307 minikube.k8s.io/primary=false: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "multinode-439307-m03" not found
	
	W1008 14:24:41.727666  661380 out.go:285] * 
	* 
	W1008 14:24:41.732182  661380 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 14:24:41.733477  661380 out.go:203] 

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-linux-amd64 node add -p multinode-439307 -v=5 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiNode/serial/AddNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect multinode-439307
helpers_test.go:243: (dbg) docker inspect multinode-439307:

-- stdout --
	[
	    {
	        "Id": "ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba",
	        "Created": "2025-10-08T14:23:23.101908381Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 655454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:23:23.137079331Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/hosts",
	        "LogPath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba-json.log",
	        "Name": "/multinode-439307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-439307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-439307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba",
	                "LowerDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301-init/diff:/var/lib/docker/overlay2/97746716e496f19c0b3fdecffe1f175c04923b8f3f05ea2a8a25747dfddb9999/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-439307",
	                "Source": "/var/lib/docker/volumes/multinode-439307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-439307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-439307",
	                "name.minikube.sigs.k8s.io": "multinode-439307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd4a1327be75cbe250d2a23b2c88f13f060fa136f90eabee1eecd426d6567242",
	            "SandboxKey": "/var/run/docker/netns/dd4a1327be75",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33306"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33307"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33310"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33308"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33309"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-439307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:be:98:9b:84:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e4823570a3f40e014e3b0688e11409f133ed3676e15bbaea99f537a7b7c50d6",
	                    "EndpointID": "60d8d5339fd7e699ccfe64b7708f7ac1dbc1925b92a76d8d9fc8cbcb32a7d344",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "multinode-439307",
	                        "ba6a97f76636"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
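Most of the inspect payload above is incidental to this failure; the published ports and the network attachment are the fields the post-mortem cares about. Either can be extracted directly with docker inspect's --format flag instead of scanning the full JSON (a sketch; the container name is taken from this run):

	# print only the published port map shown under NetworkSettings.Ports
	docker inspect -f '{{json .NetworkSettings.Ports}}' multinode-439307

	# print the container's IPv4 address on the multinode-439307 network
	docker inspect -f '{{(index .NetworkSettings.Networks "multinode-439307").IPAddress}}' multinode-439307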
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-439307 -n multinode-439307
helpers_test.go:252: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p multinode-439307 logs -n 25: (1.038227857s)
helpers_test.go:260: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                              ARGS                                                              │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ mount-start-2-801712 ssh -- ls /minikube-host                                                                                  │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ delete  │ -p mount-start-1-785074 --alsologtostderr -v=5                                                                                 │ mount-start-1-785074 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ ssh     │ mount-start-2-801712 ssh -- ls /minikube-host                                                                                  │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ stop    │ -p mount-start-2-801712                                                                                                        │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ start   │ -p mount-start-2-801712                                                                                                        │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ ssh     │ mount-start-2-801712 ssh -- ls /minikube-host                                                                                  │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ delete  │ -p mount-start-2-801712                                                                                                        │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ delete  │ -p mount-start-1-785074                                                                                                        │ mount-start-1-785074 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ start   │ -p multinode-439307 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml                                              │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- rollout status deployment/busybox                                                                       │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- get pods -o jsonpath='{.items[*].status.podIP}'                                                         │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                        │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.io                                                 │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.io                                                 │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.default                                            │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.default                                            │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.default.svc.cluster.local                          │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.default.svc.cluster.local                          │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                        │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3    │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- sh -c ping -c 1 192.168.67.1                                           │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3    │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- sh -c ping -c 1 192.168.67.1                                           │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ node    │ add -p multinode-439307 -v=5 --alsologtostderr                                                                                 │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:23:17
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:23:17.956987  654880 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:23:17.957267  654880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:23:17.957278  654880 out.go:374] Setting ErrFile to fd 2...
	I1008 14:23:17.957285  654880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:23:17.957560  654880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	I1008 14:23:17.958095  654880 out.go:368] Setting JSON to false
	I1008 14:23:17.959069  654880 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7547,"bootTime":1759925851,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:23:17.959183  654880 start.go:141] virtualization: kvm guest
	I1008 14:23:17.961334  654880 out.go:179] * [multinode-439307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:23:17.962856  654880 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:23:17.962854  654880 notify.go:220] Checking for updates...
	I1008 14:23:17.966278  654880 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:23:17.967770  654880 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	I1008 14:23:17.969198  654880 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	I1008 14:23:17.970595  654880 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:23:17.971850  654880 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:23:17.973258  654880 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:23:17.996300  654880 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:23:17.996406  654880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:23:18.050277  654880 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 14:23:18.040372301 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:23:18.050390  654880 docker.go:318] overlay module found
	I1008 14:23:18.052374  654880 out.go:179] * Using the docker driver based on user configuration
	I1008 14:23:18.054067  654880 start.go:305] selected driver: docker
	I1008 14:23:18.054089  654880 start.go:925] validating driver "docker" against <nil>
	I1008 14:23:18.054101  654880 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:23:18.054660  654880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:23:18.107655  654880 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 14:23:18.098187471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:23:18.107832  654880 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 14:23:18.108067  654880 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:23:18.109831  654880 out.go:179] * Using Docker driver with root privileges
	I1008 14:23:18.111024  654880 cni.go:84] Creating CNI manager for ""
	I1008 14:23:18.111088  654880 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 14:23:18.111100  654880 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 14:23:18.111162  654880 start.go:349] cluster config:
	{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:23:18.112399  654880 out.go:179] * Starting "multinode-439307" primary control-plane node in "multinode-439307" cluster
	I1008 14:23:18.113554  654880 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1008 14:23:18.114910  654880 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:23:18.116063  654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:23:18.116103  654880 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:23:18.116106  654880 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1008 14:23:18.116207  654880 cache.go:58] Caching tarball of preloaded images
	I1008 14:23:18.116291  654880 preload.go:233] Found /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1008 14:23:18.116302  654880 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1008 14:23:18.116625  654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:23:18.116652  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json: {Name:mk22bd6f1fa53f8e3127efb61d08a257a62e2626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:18.136591  654880 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:23:18.136639  654880 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:23:18.136657  654880 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:23:18.136684  654880 start.go:360] acquireMachinesLock for multinode-439307: {Name:mkf4360b9146660aeff5a4ae109e04568869fc59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:23:18.136783  654880 start.go:364] duration metric: took 81.212µs to acquireMachinesLock for "multinode-439307"
	I1008 14:23:18.136807  654880 start.go:93] Provisioning new machine with config: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1008 14:23:18.136878  654880 start.go:125] createHost starting for "" (driver="docker")
	I1008 14:23:18.138834  654880 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 14:23:18.139088  654880 start.go:159] libmachine.API.Create for "multinode-439307" (driver="docker")
	I1008 14:23:18.139120  654880 client.go:168] LocalClient.Create starting
	I1008 14:23:18.139174  654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem
	I1008 14:23:18.139205  654880 main.go:141] libmachine: Decoding PEM data...
	I1008 14:23:18.139219  654880 main.go:141] libmachine: Parsing certificate...
	I1008 14:23:18.139269  654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem
	I1008 14:23:18.139287  654880 main.go:141] libmachine: Decoding PEM data...
	I1008 14:23:18.139297  654880 main.go:141] libmachine: Parsing certificate...
	I1008 14:23:18.139588  654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 14:23:18.155901  654880 cli_runner.go:211] docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 14:23:18.155965  654880 network_create.go:284] running [docker network inspect multinode-439307] to gather additional debugging logs...
	I1008 14:23:18.156005  654880 cli_runner.go:164] Run: docker network inspect multinode-439307
	W1008 14:23:18.172653  654880 cli_runner.go:211] docker network inspect multinode-439307 returned with exit code 1
	I1008 14:23:18.172693  654880 network_create.go:287] error running [docker network inspect multinode-439307]: docker network inspect multinode-439307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-439307 not found
	I1008 14:23:18.172713  654880 network_create.go:289] output of [docker network inspect multinode-439307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-439307 not found
	
	** /stderr **
	I1008 14:23:18.172884  654880 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:23:18.189934  654880 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-579739baec73 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:69:9e:8b:7e:c1} reservation:<nil>}
	I1008 14:23:18.190282  654880 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de056d86a4f7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:00:90:f6:d9:cb} reservation:<nil>}
	I1008 14:23:18.190681  654880 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d36540}
	I1008 14:23:18.190708  654880 network_create.go:124] attempt to create docker network multinode-439307 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1008 14:23:18.190760  654880 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-439307 multinode-439307
	I1008 14:23:18.248882  654880 network_create.go:108] docker network multinode-439307 192.168.67.0/24 created
	I1008 14:23:18.248914  654880 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-439307" container
	I1008 14:23:18.249056  654880 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 14:23:18.266495  654880 cli_runner.go:164] Run: docker volume create multinode-439307 --label name.minikube.sigs.k8s.io=multinode-439307 --label created_by.minikube.sigs.k8s.io=true
	I1008 14:23:18.284793  654880 oci.go:103] Successfully created a docker volume multinode-439307
	I1008 14:23:18.284903  654880 cli_runner.go:164] Run: docker run --rm --name multinode-439307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307 --entrypoint /usr/bin/test -v multinode-439307:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 14:23:18.663779  654880 oci.go:107] Successfully prepared a docker volume multinode-439307
	I1008 14:23:18.663869  654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:23:18.663894  654880 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 14:23:18.663972  654880 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 14:23:23.029420  654880 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.36536671s)
	I1008 14:23:23.029457  654880 kic.go:203] duration metric: took 4.365557889s to extract preloaded images to volume ...
	W1008 14:23:23.029548  654880 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 14:23:23.029580  654880 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 14:23:23.029617  654880 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 14:23:23.086211  654880 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-439307 --name multinode-439307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-439307 --network multinode-439307 --ip 192.168.67.2 --volume multinode-439307:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 14:23:23.354039  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Running}}
	I1008 14:23:23.375428  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:23.393578  654880 cli_runner.go:164] Run: docker exec multinode-439307 stat /var/lib/dpkg/alternatives/iptables
	I1008 14:23:23.439666  654880 oci.go:144] the created container "multinode-439307" has a running status.
	I1008 14:23:23.439697  654880 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa...
	I1008 14:23:23.880004  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 14:23:23.880071  654880 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 14:23:23.906419  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:23.924677  654880 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 14:23:23.924697  654880 kic_runner.go:114] Args: [docker exec --privileged multinode-439307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 14:23:23.977234  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:23.994199  654880 machine.go:93] provisionDockerMachine start ...
	I1008 14:23:23.994313  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:24.011535  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:23:24.011821  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33306 <nil> <nil>}
	I1008 14:23:24.011834  654880 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:23:24.012536  654880 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47140->127.0.0.1:33306: read: connection reset by peer
	I1008 14:23:27.162380  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307
	
	I1008 14:23:27.162426  654880 ubuntu.go:182] provisioning hostname "multinode-439307"
	I1008 14:23:27.162486  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:27.180732  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:23:27.180972  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33306 <nil> <nil>}
	I1008 14:23:27.181011  654880 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-439307 && echo "multinode-439307" | sudo tee /etc/hostname
	I1008 14:23:27.339937  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307
	
	I1008 14:23:27.340069  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:27.358403  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:23:27.358642  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33306 <nil> <nil>}
	I1008 14:23:27.358660  654880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-439307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-439307/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-439307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:23:27.507072  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:23:27.507109  654880 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-513010/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-513010/.minikube}
	I1008 14:23:27.507134  654880 ubuntu.go:190] setting up certificates
	I1008 14:23:27.507146  654880 provision.go:84] configureAuth start
	I1008 14:23:27.507227  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307
	I1008 14:23:27.525723  654880 provision.go:143] copyHostCerts
	I1008 14:23:27.525774  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
	I1008 14:23:27.525813  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem, removing ...
	I1008 14:23:27.525825  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
	I1008 14:23:27.525916  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem (1123 bytes)
	I1008 14:23:27.526089  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
	I1008 14:23:27.526119  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem, removing ...
	I1008 14:23:27.526129  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
	I1008 14:23:27.526175  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem (1675 bytes)
	I1008 14:23:27.526250  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
	I1008 14:23:27.526274  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem, removing ...
	I1008 14:23:27.526283  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
	I1008 14:23:27.526323  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem (1078 bytes)
	I1008 14:23:27.526398  654880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem org=jenkins.multinode-439307 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-439307]
	I1008 14:23:27.677124  654880 provision.go:177] copyRemoteCerts
	I1008 14:23:27.677186  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:23:27.677229  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:27.696280  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:27.800677  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 14:23:27.800760  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1008 14:23:27.821249  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 14:23:27.821317  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 14:23:27.839198  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 14:23:27.839275  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:23:27.857535  654880 provision.go:87] duration metric: took 350.370022ms to configureAuth
	I1008 14:23:27.857570  654880 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:23:27.857755  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:23:27.857770  654880 machine.go:96] duration metric: took 3.8635448s to provisionDockerMachine
	I1008 14:23:27.857780  654880 client.go:171] duration metric: took 9.718653028s to LocalClient.Create
	I1008 14:23:27.857826  654880 start.go:167] duration metric: took 9.718739942s to libmachine.API.Create "multinode-439307"
	I1008 14:23:27.857838  654880 start.go:293] postStartSetup for "multinode-439307" (driver="docker")
	I1008 14:23:27.857849  654880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:23:27.857921  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:23:27.857970  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:27.876246  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:27.982611  654880 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:23:27.986361  654880 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:23:27.986391  654880 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:23:27.986402  654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/addons for local assets ...
	I1008 14:23:27.986465  654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/files for local assets ...
	I1008 14:23:27.986549  654880 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> 5167872.pem in /etc/ssl/certs
	I1008 14:23:27.986586  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /etc/ssl/certs/5167872.pem
	I1008 14:23:27.986676  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 14:23:27.994501  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /etc/ssl/certs/5167872.pem (1708 bytes)
	I1008 14:23:28.015944  654880 start.go:296] duration metric: took 158.091308ms for postStartSetup
	I1008 14:23:28.016330  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307
	I1008 14:23:28.033722  654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:23:28.034024  654880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:23:28.034069  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:28.051472  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:28.152696  654880 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:23:28.157578  654880 start.go:128] duration metric: took 10.02068325s to createHost
	I1008 14:23:28.157607  654880 start.go:83] releasing machines lock for "multinode-439307", held for 10.020812018s
	I1008 14:23:28.157686  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307
	I1008 14:23:28.175043  654880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:23:28.175118  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:28.175044  654880 ssh_runner.go:195] Run: cat /version.json
	I1008 14:23:28.175238  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:28.192842  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:28.193859  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:28.346450  654880 ssh_runner.go:195] Run: systemctl --version
	I1008 14:23:28.353340  654880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 14:23:28.358122  654880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:23:28.358188  654880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:23:28.384439  654880 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
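
	The find/mv pair above is how minikube sidelines competing CNI configs: anything matching *bridge* or *podman* in /etc/cni/net.d is renamed with a .mk_disabled suffix so only the CNI installed later is loaded. A cleaned-up sketch of the same invocation (globs quoted here so an interactive shell does not expand them; paths as in the log):

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
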
	I1008 14:23:28.384464  654880 start.go:495] detecting cgroup driver to use...
	I1008 14:23:28.384495  654880 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:23:28.384566  654880 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1008 14:23:28.399323  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 14:23:28.412378  654880 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:23:28.412440  654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:23:28.428847  654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:23:28.446687  654880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:23:28.526136  654880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:23:28.614080  654880 docker.go:234] disabling docker service ...
	I1008 14:23:28.614149  654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:23:28.633742  654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:23:28.647026  654880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:23:28.727238  654880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:23:28.808930  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:23:28.821761  654880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:23:28.836040  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1008 14:23:28.847491  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 14:23:28.856854  654880 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1008 14:23:28.856920  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1008 14:23:28.866133  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 14:23:28.875367  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 14:23:28.884374  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 14:23:28.893574  654880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:23:28.902220  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 14:23:28.911486  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1008 14:23:28.920623  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
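
	The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the pause image, keep restrict_oom_score_adj off, switch the CRI plugin to the systemd cgroup driver and the runc v2 shim, and point conf_dir at /etc/cni/net.d. Grouped into one call as a sketch, under the same assumption the per-line edits rely on (the keys already exist in the file):

	    sudo sed -i -r \
	      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' \
	      -e 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' \
	      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|' \
	      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
	      -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|' \
	      /etc/containerd/config.toml
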
	I1008 14:23:28.929996  654880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:23:28.937926  654880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:23:28.946203  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:23:29.028153  654880 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1008 14:23:29.132493  654880 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1008 14:23:29.132559  654880 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1008 14:23:29.136824  654880 start.go:563] Will wait 60s for crictl version
	I1008 14:23:29.136879  654880 ssh_runner.go:195] Run: which crictl
	I1008 14:23:29.140620  654880 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:23:29.166990  654880 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
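
	The two waits above (up to 60s for the socket to appear, then a crictl version probe) can be reproduced by hand; a minimal sketch with the same socket path:

	    timeout 60 sh -c 'until stat /run/containerd/containerd.sock >/dev/null 2>&1; do sleep 1; done'
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
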
	I1008 14:23:29.167069  654880 ssh_runner.go:195] Run: containerd --version
	I1008 14:23:29.193758  654880 ssh_runner.go:195] Run: containerd --version
	I1008 14:23:29.222040  654880 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1008 14:23:29.223401  654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:23:29.240948  654880 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1008 14:23:29.245849  654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
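
	The grep/cp dance above is an idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current gateway IP, and copy the temp file back as root (a plain redirect into /etc/hosts would need root for the whole pipeline). Spelled out with this run's values:

	    ip=192.168.67.1 name=host.minikube.internal
	    { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
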
	I1008 14:23:29.256781  654880 kubeadm.go:883] updating cluster {Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:23:29.256900  654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:23:29.256945  654880 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:23:29.282114  654880 containerd.go:627] all images are preloaded for containerd runtime.
	I1008 14:23:29.282137  654880 containerd.go:534] Images already preloaded, skipping extraction
	I1008 14:23:29.282188  654880 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:23:29.306940  654880 containerd.go:627] all images are preloaded for containerd runtime.
	I1008 14:23:29.306963  654880 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:23:29.306971  654880 kubeadm.go:934] updating node { 192.168.67.2 8443 v1.34.1 containerd true true} ...
	I1008 14:23:29.307091  654880 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-439307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:23:29.307158  654880 ssh_runner.go:195] Run: sudo crictl info
	I1008 14:23:29.333006  654880 cni.go:84] Creating CNI manager for ""
	I1008 14:23:29.333038  654880 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 14:23:29.333058  654880 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:23:29.333091  654880 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-439307 NodeName:multinode-439307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:23:29.333227  654880 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "multinode-439307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.67.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:23:29.333298  654880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:23:29.341693  654880 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:23:29.341752  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:23:29.350015  654880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1008 14:23:29.363305  654880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:23:29.379485  654880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1008 14:23:29.392555  654880 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:23:29.396398  654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:23:29.406631  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:23:29.483438  654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:23:29.509514  654880 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307 for IP: 192.168.67.2
	I1008 14:23:29.509542  654880 certs.go:195] generating shared ca certs ...
	I1008 14:23:29.509563  654880 certs.go:227] acquiring lock for ca certs: {Name:mk57aa9b2383fcc0908491da1ce926c707ff69a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:29.509734  654880 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key
	I1008 14:23:29.509788  654880 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key
	I1008 14:23:29.509802  654880 certs.go:257] generating profile certs ...
	I1008 14:23:29.509910  654880 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key
	I1008 14:23:29.509939  654880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt with IP's: []
	I1008 14:23:29.610645  654880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt ...
	I1008 14:23:29.610679  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt: {Name:mkf1a19119257c35c0be4630341107abefe0712a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:29.610870  654880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key ...
	I1008 14:23:29.610891  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key: {Name:mk49a676c10aed18805a93ab7df3049b7dcfa5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:29.610988  654880 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8
	I1008 14:23:29.611006  654880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I1008 14:23:29.809665  654880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8 ...
	I1008 14:23:29.809701  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8: {Name:mk049ea208d229fa055039856d3579ebb9e0840d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:29.809887  654880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8 ...
	I1008 14:23:29.809902  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8: {Name:mkbbd81466b2cdd0cb264ee782d6df895a6557f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:29.809991  654880 certs.go:382] copying /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8 -> /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt
	I1008 14:23:29.810098  654880 certs.go:386] copying /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8 -> /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key
	I1008 14:23:29.810163  654880 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key
	I1008 14:23:29.810178  654880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt with IP's: []
	I1008 14:23:30.434846  654880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt ...
	I1008 14:23:30.434880  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt: {Name:mk74033eb7b0061c1da9d5a1860ee35ec43567a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:30.435058  654880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key ...
	I1008 14:23:30.435073  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key: {Name:mkb2b7339b2c5bc4801b86127d693ce13ee35f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:30.435152  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 14:23:30.435180  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 14:23:30.435191  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 14:23:30.435204  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 14:23:30.435216  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 14:23:30.435226  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 14:23:30.435239  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 14:23:30.435249  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 14:23:30.435302  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem (1338 bytes)
	W1008 14:23:30.435341  654880 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787_empty.pem, impossibly tiny 0 bytes
	I1008 14:23:30.435351  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:23:30.435377  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:23:30.435399  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:23:30.435419  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem (1675 bytes)
	I1008 14:23:30.435456  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem (1708 bytes)
	I1008 14:23:30.435480  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:23:30.435493  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem -> /usr/share/ca-certificates/516787.pem
	I1008 14:23:30.435505  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /usr/share/ca-certificates/5167872.pem
	I1008 14:23:30.436154  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:23:30.454787  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 14:23:30.472361  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:23:30.489956  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 14:23:30.507583  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:23:30.525415  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:23:30.543120  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:23:30.560854  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 14:23:30.578730  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:23:30.599796  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem --> /usr/share/ca-certificates/516787.pem (1338 bytes)
	I1008 14:23:30.617312  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /usr/share/ca-certificates/5167872.pem (1708 bytes)
	I1008 14:23:30.635626  654880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:23:30.648875  654880 ssh_runner.go:195] Run: openssl version
	I1008 14:23:30.655674  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516787.pem && ln -fs /usr/share/ca-certificates/516787.pem /etc/ssl/certs/516787.pem"
	I1008 14:23:30.664582  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516787.pem
	I1008 14:23:30.668786  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:09 /usr/share/ca-certificates/516787.pem
	I1008 14:23:30.668853  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516787.pem
	I1008 14:23:30.703803  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516787.pem /etc/ssl/certs/51391683.0"
	I1008 14:23:30.713696  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5167872.pem && ln -fs /usr/share/ca-certificates/5167872.pem /etc/ssl/certs/5167872.pem"
	I1008 14:23:30.722925  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5167872.pem
	I1008 14:23:30.726802  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:09 /usr/share/ca-certificates/5167872.pem
	I1008 14:23:30.726862  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5167872.pem
	I1008 14:23:30.760940  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5167872.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:23:30.770017  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:23:30.778517  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:23:30.782405  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:03 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:23:30.782465  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:23:30.816706  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
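
	OpenSSL resolves trusted CAs in /etc/ssl/certs by subject-hash filenames, which is why each cert above ends up behind a <hash>.0 symlink (b5213941.0 for minikubeCA.pem, and so on). How one of those names is derived, as a sketch:

	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
	    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
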
	I1008 14:23:30.825787  654880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:23:30.829676  654880 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 14:23:30.829741  654880 kubeadm.go:400] StartCluster: {Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:23:30.829825  654880 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1008 14:23:30.829872  654880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:23:30.857015  654880 cri.go:89] found id: ""
	I1008 14:23:30.857078  654880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:23:30.865318  654880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:23:30.873182  654880 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:23:30.873235  654880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:23:30.880797  654880 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:23:30.880817  654880 kubeadm.go:157] found existing configuration files:
	
	I1008 14:23:30.880879  654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 14:23:30.888347  654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:23:30.888425  654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:23:30.895504  654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 14:23:30.903314  654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:23:30.903371  654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:23:30.911037  654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 14:23:30.918990  654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:23:30.919046  654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:23:30.927124  654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 14:23:30.935194  654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:23:30.935282  654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:23:30.943051  654880 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:23:31.011073  654880 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:23:31.072669  654880 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 14:23:42.013295  654880 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:23:42.013386  654880 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:23:42.013526  654880 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 14:23:42.013610  654880 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 14:23:42.013681  654880 kubeadm.go:318] OS: Linux
	I1008 14:23:42.013738  654880 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 14:23:42.013787  654880 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 14:23:42.013830  654880 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 14:23:42.013874  654880 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 14:23:42.013925  654880 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 14:23:42.014006  654880 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 14:23:42.014054  654880 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 14:23:42.014092  654880 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 14:23:42.014187  654880 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:23:42.014301  654880 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:23:42.014382  654880 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:23:42.014436  654880 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:23:42.015973  654880 out.go:252]   - Generating certificates and keys ...
	I1008 14:23:42.016057  654880 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:23:42.016112  654880 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:23:42.016189  654880 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 14:23:42.016266  654880 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 14:23:42.016339  654880 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 14:23:42.016411  654880 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 14:23:42.016496  654880 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 14:23:42.016630  654880 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-439307] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1008 14:23:42.016681  654880 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 14:23:42.016787  654880 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-439307] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1008 14:23:42.016843  654880 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 14:23:42.016903  654880 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 14:23:42.016945  654880 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 14:23:42.017040  654880 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:23:42.017097  654880 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:23:42.017144  654880 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:23:42.017213  654880 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:23:42.017286  654880 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:23:42.017348  654880 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:23:42.017478  654880 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:23:42.017571  654880 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:23:42.019103  654880 out.go:252]   - Booting up control plane ...
	I1008 14:23:42.019195  654880 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:23:42.019290  654880 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:23:42.019381  654880 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:23:42.019498  654880 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:23:42.019651  654880 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:23:42.019758  654880 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:23:42.019874  654880 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:23:42.019923  654880 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:23:42.020112  654880 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:23:42.020255  654880 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:23:42.020363  654880 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.925821ms
	I1008 14:23:42.020445  654880 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:23:42.020510  654880 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.67.2:8443/livez
	I1008 14:23:42.020603  654880 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:23:42.020682  654880 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:23:42.020747  654880 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.97144594s
	I1008 14:23:42.020832  654880 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.662946335s
	I1008 14:23:42.020919  654880 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501501466s
	I1008 14:23:42.021101  654880 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 14:23:42.021289  654880 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 14:23:42.021368  654880 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 14:23:42.021621  654880 kubeadm.go:318] [mark-control-plane] Marking the node multinode-439307 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 14:23:42.021687  654880 kubeadm.go:318] [bootstrap-token] Using token: i5r6w0.sj0dfahq56oi5osn
	I1008 14:23:42.023115  654880 out.go:252]   - Configuring RBAC rules ...
	I1008 14:23:42.023282  654880 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 14:23:42.023409  654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 14:23:42.023542  654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 14:23:42.023709  654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 14:23:42.023851  654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 14:23:42.023949  654880 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 14:23:42.024072  654880 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 14:23:42.024109  654880 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 14:23:42.024148  654880 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 14:23:42.024154  654880 kubeadm.go:318] 
	I1008 14:23:42.024215  654880 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 14:23:42.024224  654880 kubeadm.go:318] 
	I1008 14:23:42.024309  654880 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 14:23:42.024319  654880 kubeadm.go:318] 
	I1008 14:23:42.024361  654880 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 14:23:42.024433  654880 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 14:23:42.024475  654880 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 14:23:42.024485  654880 kubeadm.go:318] 
	I1008 14:23:42.024537  654880 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 14:23:42.024543  654880 kubeadm.go:318] 
	I1008 14:23:42.024588  654880 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 14:23:42.024595  654880 kubeadm.go:318] 
	I1008 14:23:42.024647  654880 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 14:23:42.024727  654880 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 14:23:42.024793  654880 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 14:23:42.024806  654880 kubeadm.go:318] 
	I1008 14:23:42.024904  654880 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 14:23:42.025017  654880 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 14:23:42.025034  654880 kubeadm.go:318] 
	I1008 14:23:42.025112  654880 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token i5r6w0.sj0dfahq56oi5osn \
	I1008 14:23:42.025201  654880 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f \
	I1008 14:23:42.025232  654880 kubeadm.go:318] 	--control-plane 
	I1008 14:23:42.025242  654880 kubeadm.go:318] 
	I1008 14:23:42.025327  654880 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 14:23:42.025334  654880 kubeadm.go:318] 
	I1008 14:23:42.025424  654880 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token i5r6w0.sj0dfahq56oi5osn \
	I1008 14:23:42.025535  654880 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f 
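
	All of the output above came from the single kubeadm init launched at 14:23:30, driven entirely by the rendered /var/tmp/minikube/kubeadm.yaml. To preview those phases without mutating a node, kubeadm supports a dry run against the same config (sketch; assumes kubeadm is on PATH):

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
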
	I1008 14:23:42.025548  654880 cni.go:84] Creating CNI manager for ""
	I1008 14:23:42.025554  654880 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 14:23:42.027007  654880 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 14:23:42.028122  654880 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 14:23:42.033376  654880 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 14:23:42.033399  654880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 14:23:42.047336  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 14:23:42.257680  654880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 14:23:42.257777  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:42.257788  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-439307 minikube.k8s.io/updated_at=2025_10_08T14_23_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=multinode-439307 minikube.k8s.io/primary=true
	I1008 14:23:42.267920  654880 ops.go:34] apiserver oom_adj: -16
	I1008 14:23:42.333752  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:42.834103  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:43.334513  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:43.834031  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:44.334151  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:44.834515  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:45.334213  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:45.834573  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:46.334831  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:46.833924  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:46.910005  654880 kubeadm.go:1113] duration metric: took 4.652297133s to wait for elevateKubeSystemPrivileges
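
	The burst of "kubectl get sa default" calls above is a ~500ms poll: the "default" ServiceAccount only appears once the controller-manager has started, and the minikube-rbac binding issued at 14:23:42 needs it. As a plain loop (same binary path and kubeconfig as logged):

	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
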
	I1008 14:23:46.910044  654880 kubeadm.go:402] duration metric: took 16.080310474s to StartCluster
	I1008 14:23:46.910065  654880 settings.go:142] acquiring lock: {Name:mk8e4c0f084ac2281293848ef8bd3096692e3417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:46.910151  654880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-513010/kubeconfig
	I1008 14:23:46.910878  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/kubeconfig: {Name:mk629eb0239182a6659e3d616a150e5234772a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:46.911151  654880 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1008 14:23:46.911192  654880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 14:23:46.911219  654880 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 14:23:46.911355  654880 addons.go:69] Setting storage-provisioner=true in profile "multinode-439307"
	I1008 14:23:46.911395  654880 addons.go:238] Setting addon storage-provisioner=true in "multinode-439307"
	I1008 14:23:46.911396  654880 addons.go:69] Setting default-storageclass=true in profile "multinode-439307"
	I1008 14:23:46.911426  654880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-439307"
	I1008 14:23:46.911435  654880 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:23:46.911401  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:23:46.911826  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:46.912016  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:46.912657  654880 out.go:179] * Verifying Kubernetes components...
	I1008 14:23:46.917571  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:23:46.938275  654880 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 14:23:46.938669  654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:23:46.939674  654880 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 14:23:46.939699  654880 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 14:23:46.939706  654880 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 14:23:46.939712  654880 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 14:23:46.939717  654880 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 14:23:46.939726  654880 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 14:23:46.940295  654880 addons.go:238] Setting addon default-storageclass=true in "multinode-439307"
	I1008 14:23:46.940373  654880 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:23:46.940553  654880 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:23:46.940574  654880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 14:23:46.940644  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:46.940902  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:46.975775  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:46.977730  654880 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 14:23:46.977762  654880 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 14:23:46.977823  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:47.019509  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:47.062130  654880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
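
	The pipeline at 14:23:47.062 edits the live CoreDNS Corefile: it inserts a hosts{} block ahead of the resolv.conf forwarder so pods can resolve host.minikube.internal, then pushes the ConfigMap back with kubectl replace. Unrolled, minus the sudo and full binary paths (assumes kubectl on PATH and GNU sed):

	    kubectl -n kube-system get configmap coredns -o yaml \
	      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' \
	      | kubectl replace -f -
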
	I1008 14:23:47.114509  654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:23:47.131279  654880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:23:47.146387  654880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:23:47.235407  654880 start.go:976] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I1008 14:23:47.236068  654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:23:47.236491  654880 node_ready.go:35] waiting up to 6m0s for node "multinode-439307" to be "Ready" ...
	I1008 14:23:47.437485  654880 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1008 14:23:47.438409  654880 addons.go:514] duration metric: took 527.189163ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 14:23:47.740068  654880 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-439307" context rescaled to 1 replicas
	W1008 14:23:49.240140  654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
	W1008 14:23:51.240341  654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
	W1008 14:23:53.740468  654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
	W1008 14:23:55.740674  654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
	I1008 14:23:58.240406  654880 node_ready.go:49] node "multinode-439307" is "Ready"
	I1008 14:23:58.240442  654880 node_ready.go:38] duration metric: took 11.003905737s for node "multinode-439307" to be "Ready" ...
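node_ready polls the node's Ready condition through the client-go config shown above. A rough hand-check of the same condition with kubectl (not something the test itself runs) would be:

    kubectl --context multinode-439307 wait --for=condition=Ready node/multinode-439307 --timeout=6m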
	I1008 14:23:58.240462  654880 api_server.go:52] waiting for apiserver process to appear ...
	I1008 14:23:58.240528  654880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:23:58.256864  654880 api_server.go:72] duration metric: took 11.345663766s to wait for apiserver process to appear ...
	I1008 14:23:58.256909  654880 api_server.go:88] waiting for apiserver healthz status ...
	I1008 14:23:58.256937  654880 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1008 14:23:58.261705  654880 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
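The healthz probe is a plain HTTPS GET against the apiserver; reproduced by hand (with -k to skip CA verification, since the cluster CA lives under the minikube profile) it looks like:

    curl -sk https://192.168.67.2:8443/healthz
    # -> ok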
	I1008 14:23:58.262918  654880 api_server.go:141] control plane version: v1.34.1
	I1008 14:23:58.262945  654880 api_server.go:131] duration metric: took 6.028377ms to wait for apiserver health ...
	I1008 14:23:58.262956  654880 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 14:23:58.267800  654880 system_pods.go:59] 8 kube-system pods found
	I1008 14:23:58.267853  654880 system_pods.go:61] "coredns-66bc5c9577-llvkc" [a445b5ef-8d30-4b7c-a40f-77f2a9072e7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 14:23:58.267870  654880 system_pods.go:61] "etcd-multinode-439307" [1989112b-ab3b-4883-9f2c-19ee41565704] Running
	I1008 14:23:58.267878  654880 system_pods.go:61] "kindnet-l6pqj" [fea0f284-17d4-438c-91a6-14831ce6ce5c] Running
	I1008 14:23:58.267884  654880 system_pods.go:61] "kube-apiserver-multinode-439307" [18f77e80-010e-4779-9028-6093a55219c5] Running
	I1008 14:23:58.267889  654880 system_pods.go:61] "kube-controller-manager-multinode-439307" [f4954c96-43a5-408b-a99e-423ab197e112] Running
	I1008 14:23:58.267903  654880 system_pods.go:61] "kube-proxy-sjzfx" [1211872c-1472-435c-a117-2656ba2fca8e] Running
	I1008 14:23:58.267908  654880 system_pods.go:61] "kube-scheduler-multinode-439307" [a940c86a-bd75-4100-86f5-0b6a53040f2b] Running
	I1008 14:23:58.267914  654880 system_pods.go:61] "storage-provisioner" [e1d410c3-de2a-4e2a-88c1-93970ce8b254] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 14:23:58.267923  654880 system_pods.go:74] duration metric: took 4.960123ms to wait for pod list to return data ...
	I1008 14:23:58.267935  654880 default_sa.go:34] waiting for default service account to be created ...
	I1008 14:23:58.270747  654880 default_sa.go:45] found service account: "default"
	I1008 14:23:58.270770  654880 default_sa.go:55] duration metric: took 2.828587ms for default service account to be created ...
	I1008 14:23:58.270784  654880 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 14:23:58.273850  654880 system_pods.go:86] 8 kube-system pods found
	I1008 14:23:58.273881  654880 system_pods.go:89] "coredns-66bc5c9577-llvkc" [a445b5ef-8d30-4b7c-a40f-77f2a9072e7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 14:23:58.273886  654880 system_pods.go:89] "etcd-multinode-439307" [1989112b-ab3b-4883-9f2c-19ee41565704] Running
	I1008 14:23:58.273892  654880 system_pods.go:89] "kindnet-l6pqj" [fea0f284-17d4-438c-91a6-14831ce6ce5c] Running
	I1008 14:23:58.273896  654880 system_pods.go:89] "kube-apiserver-multinode-439307" [18f77e80-010e-4779-9028-6093a55219c5] Running
	I1008 14:23:58.273899  654880 system_pods.go:89] "kube-controller-manager-multinode-439307" [f4954c96-43a5-408b-a99e-423ab197e112] Running
	I1008 14:23:58.273903  654880 system_pods.go:89] "kube-proxy-sjzfx" [1211872c-1472-435c-a117-2656ba2fca8e] Running
	I1008 14:23:58.273911  654880 system_pods.go:89] "kube-scheduler-multinode-439307" [a940c86a-bd75-4100-86f5-0b6a53040f2b] Running
	I1008 14:23:58.273916  654880 system_pods.go:89] "storage-provisioner" [e1d410c3-de2a-4e2a-88c1-93970ce8b254] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 14:23:58.273944  654880 retry.go:31] will retry after 204.950572ms: missing components: kube-dns
	I1008 14:23:58.483515  654880 system_pods.go:86] 8 kube-system pods found
	I1008 14:23:58.483557  654880 system_pods.go:89] "coredns-66bc5c9577-llvkc" [a445b5ef-8d30-4b7c-a40f-77f2a9072e7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 14:23:58.483566  654880 system_pods.go:89] "etcd-multinode-439307" [1989112b-ab3b-4883-9f2c-19ee41565704] Running
	I1008 14:23:58.483573  654880 system_pods.go:89] "kindnet-l6pqj" [fea0f284-17d4-438c-91a6-14831ce6ce5c] Running
	I1008 14:23:58.483577  654880 system_pods.go:89] "kube-apiserver-multinode-439307" [18f77e80-010e-4779-9028-6093a55219c5] Running
	I1008 14:23:58.483581  654880 system_pods.go:89] "kube-controller-manager-multinode-439307" [f4954c96-43a5-408b-a99e-423ab197e112] Running
	I1008 14:23:58.483586  654880 system_pods.go:89] "kube-proxy-sjzfx" [1211872c-1472-435c-a117-2656ba2fca8e] Running
	I1008 14:23:58.483591  654880 system_pods.go:89] "kube-scheduler-multinode-439307" [a940c86a-bd75-4100-86f5-0b6a53040f2b] Running
	I1008 14:23:58.483605  654880 system_pods.go:89] "storage-provisioner" [e1d410c3-de2a-4e2a-88c1-93970ce8b254] Running
	I1008 14:23:58.483625  654880 system_pods.go:126] duration metric: took 212.832591ms to wait for k8s-apps to be running ...
	I1008 14:23:58.483639  654880 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 14:23:58.483696  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:23:58.497700  654880 system_svc.go:56] duration metric: took 14.052432ms WaitForService to wait for kubelet
	I1008 14:23:58.497735  654880 kubeadm.go:586] duration metric: took 11.586544695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:23:58.497762  654880 node_conditions.go:102] verifying NodePressure condition ...
	I1008 14:23:58.501151  654880 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1008 14:23:58.501216  654880 node_conditions.go:123] node cpu capacity is 8
	I1008 14:23:58.501243  654880 node_conditions.go:105] duration metric: took 3.474604ms to run NodePressure ...
	I1008 14:23:58.501258  654880 start.go:241] waiting for startup goroutines ...
	I1008 14:23:58.501268  654880 start.go:246] waiting for cluster config update ...
	I1008 14:23:58.501283  654880 start.go:255] writing updated cluster config ...
	I1008 14:23:58.503410  654880 out.go:203] 
	I1008 14:23:58.504758  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:23:58.504834  654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:23:58.506429  654880 out.go:179] * Starting "multinode-439307-m02" worker node in "multinode-439307" cluster
	I1008 14:23:58.508117  654880 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1008 14:23:58.509438  654880 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:23:58.510664  654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:23:58.510689  654880 cache.go:58] Caching tarball of preloaded images
	I1008 14:23:58.510780  654880 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:23:58.510807  654880 preload.go:233] Found /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1008 14:23:58.510816  654880 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1008 14:23:58.510889  654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:23:58.532250  654880 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:23:58.532275  654880 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:23:58.532296  654880 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:23:58.532333  654880 start.go:360] acquireMachinesLock for multinode-439307-m02: {Name:mkd110918dd178f7f1251cdb6cbe49ec290497a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:23:58.532447  654880 start.go:364] duration metric: took 91.76µs to acquireMachinesLock for "multinode-439307-m02"
	I1008 14:23:58.532478  654880 start.go:93] Provisioning new machine with config: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I1008 14:23:58.532562  654880 start.go:125] createHost starting for "m02" (driver="docker")
	I1008 14:23:58.535151  654880 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 14:23:58.535282  654880 start.go:159] libmachine.API.Create for "multinode-439307" (driver="docker")
	I1008 14:23:58.535317  654880 client.go:168] LocalClient.Create starting
	I1008 14:23:58.535405  654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem
	I1008 14:23:58.535446  654880 main.go:141] libmachine: Decoding PEM data...
	I1008 14:23:58.535467  654880 main.go:141] libmachine: Parsing certificate...
	I1008 14:23:58.535539  654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem
	I1008 14:23:58.535570  654880 main.go:141] libmachine: Decoding PEM data...
	I1008 14:23:58.535600  654880 main.go:141] libmachine: Parsing certificate...
	I1008 14:23:58.535837  654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:23:58.553063  654880 network_create.go:77] Found existing network {name:multinode-439307 subnet:0xc00096a0f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I1008 14:23:58.553121  654880 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-439307-m02" container
	I1008 14:23:58.553194  654880 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 14:23:58.571642  654880 cli_runner.go:164] Run: docker volume create multinode-439307-m02 --label name.minikube.sigs.k8s.io=multinode-439307-m02 --label created_by.minikube.sigs.k8s.io=true
	I1008 14:23:58.590094  654880 oci.go:103] Successfully created a docker volume multinode-439307-m02
	I1008 14:23:58.590216  654880 cli_runner.go:164] Run: docker run --rm --name multinode-439307-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307-m02 --entrypoint /usr/bin/test -v multinode-439307-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 14:23:58.980132  654880 oci.go:107] Successfully prepared a docker volume multinode-439307-m02
	I1008 14:23:58.980183  654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:23:58.980210  654880 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 14:23:58.980284  654880 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 14:24:03.452942  654880 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.472598209s)
	I1008 14:24:03.452997  654880 kic.go:203] duration metric: took 4.472765246s to extract preloaded images to volume ...
	W1008 14:24:03.453098  654880 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 14:24:03.453135  654880 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 14:24:03.453189  654880 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 14:24:03.514279  654880 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-439307-m02 --name multinode-439307-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-439307-m02 --network multinode-439307 --ip 192.168.67.3 --volume multinode-439307-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 14:24:03.806322  654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Running}}
	I1008 14:24:03.825192  654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Status}}
	I1008 14:24:03.843451  654880 cli_runner.go:164] Run: docker exec multinode-439307-m02 stat /var/lib/dpkg/alternatives/iptables
	I1008 14:24:03.887312  654880 oci.go:144] the created container "multinode-439307-m02" has a running status.
	I1008 14:24:03.887351  654880 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa...
	I1008 14:24:03.981880  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 14:24:03.981940  654880 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 14:24:04.008560  654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Status}}
	I1008 14:24:04.028620  654880 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 14:24:04.028641  654880 kic_runner.go:114] Args: [docker exec --privileged multinode-439307-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 14:24:04.085475  654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Status}}
	I1008 14:24:04.104162  654880 machine.go:93] provisionDockerMachine start ...
	I1008 14:24:04.104268  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:04.125664  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:24:04.126030  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33311 <nil> <nil>}
	I1008 14:24:04.126052  654880 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:24:04.126862  654880 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34090->127.0.0.1:33311: read: connection reset by peer
	I1008 14:24:07.275164  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307-m02
	
	I1008 14:24:07.275197  654880 ubuntu.go:182] provisioning hostname "multinode-439307-m02"
	I1008 14:24:07.275268  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:07.293538  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:24:07.293764  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33311 <nil> <nil>}
	I1008 14:24:07.293777  654880 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-439307-m02 && echo "multinode-439307-m02" | sudo tee /etc/hostname
	I1008 14:24:07.452309  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307-m02
	
	I1008 14:24:07.452395  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:07.470682  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:24:07.470904  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33311 <nil> <nil>}
	I1008 14:24:07.470926  654880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-439307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-439307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-439307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:24:07.619123  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:24:07.619159  654880 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-513010/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-513010/.minikube}
	I1008 14:24:07.619176  654880 ubuntu.go:190] setting up certificates
	I1008 14:24:07.619189  654880 provision.go:84] configureAuth start
	I1008 14:24:07.619267  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m02
	I1008 14:24:07.636645  654880 provision.go:143] copyHostCerts
	I1008 14:24:07.636697  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
	I1008 14:24:07.636734  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem, removing ...
	I1008 14:24:07.636744  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
	I1008 14:24:07.636809  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem (1078 bytes)
	I1008 14:24:07.636900  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
	I1008 14:24:07.636921  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem, removing ...
	I1008 14:24:07.636925  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
	I1008 14:24:07.636953  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem (1123 bytes)
	I1008 14:24:07.637030  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
	I1008 14:24:07.637053  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem, removing ...
	I1008 14:24:07.637061  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
	I1008 14:24:07.637088  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem (1675 bytes)
	I1008 14:24:07.637144  654880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem org=jenkins.multinode-439307-m02 san=[127.0.0.1 192.168.67.3 localhost minikube multinode-439307-m02]
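provision.go signs a server certificate against the minikube CA with the SAN list shown above. A self-signed approximation (OpenSSL >= 1.1.1 for -addext; file names are illustrative, and the real flow CA-signs rather than self-signs) would be:

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.multinode-439307-m02" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.67.3,DNS:localhost,DNS:minikube,DNS:multinode-439307-m02"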
	I1008 14:24:07.912616  654880 provision.go:177] copyRemoteCerts
	I1008 14:24:07.912701  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:24:07.912746  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:07.930775  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
	I1008 14:24:08.036822  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 14:24:08.036899  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:24:08.057016  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 14:24:08.057099  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1008 14:24:08.075825  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 14:24:08.075887  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 14:24:08.094562  654880 provision.go:87] duration metric: took 475.356058ms to configureAuth
	I1008 14:24:08.094595  654880 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:24:08.094805  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:24:08.094818  654880 machine.go:96] duration metric: took 3.990634645s to provisionDockerMachine
	I1008 14:24:08.094825  654880 client.go:171] duration metric: took 9.55949919s to LocalClient.Create
	I1008 14:24:08.094846  654880 start.go:167] duration metric: took 9.559564892s to libmachine.API.Create "multinode-439307"
	I1008 14:24:08.094856  654880 start.go:293] postStartSetup for "multinode-439307-m02" (driver="docker")
	I1008 14:24:08.094864  654880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:24:08.094910  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:24:08.094953  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:08.112924  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
	I1008 14:24:08.218693  654880 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:24:08.222553  654880 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:24:08.222590  654880 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:24:08.222601  654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/addons for local assets ...
	I1008 14:24:08.222660  654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/files for local assets ...
	I1008 14:24:08.222816  654880 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> 5167872.pem in /etc/ssl/certs
	I1008 14:24:08.222833  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /etc/ssl/certs/5167872.pem
	I1008 14:24:08.222964  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 14:24:08.231254  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /etc/ssl/certs/5167872.pem (1708 bytes)
	I1008 14:24:08.252383  654880 start.go:296] duration metric: took 157.508647ms for postStartSetup
	I1008 14:24:08.252769  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m02
	I1008 14:24:08.270607  654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:24:08.270881  654880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:24:08.270929  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:08.288967  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
	I1008 14:24:08.390387  654880 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:24:08.395431  654880 start.go:128] duration metric: took 9.862849739s to createHost
	I1008 14:24:08.395464  654880 start.go:83] releasing machines lock for "multinode-439307-m02", held for 9.863003309s
	I1008 14:24:08.395547  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m02
	I1008 14:24:08.415924  654880 out.go:179] * Found network options:
	I1008 14:24:08.417255  654880 out.go:179]   - NO_PROXY=192.168.67.2
	W1008 14:24:08.418465  654880 proxy.go:120] fail to check proxy env: Error ip not in block
	W1008 14:24:08.418511  654880 proxy.go:120] fail to check proxy env: Error ip not in block
	I1008 14:24:08.418612  654880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 14:24:08.418625  654880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:24:08.418653  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:08.418693  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:08.439832  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
	I1008 14:24:08.440289  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
	W1008 14:24:08.596782  654880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:24:08.596862  654880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:24:08.623270  654880 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 14:24:08.623295  654880 start.go:495] detecting cgroup driver to use...
	I1008 14:24:08.623333  654880 detect.go:190] detected "systemd" cgroup driver on host os
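detect.go mirrors the cgroup driver of the host engine (CgroupDriver:systemd in the docker info dump near the top of this log). The same value can be read directly with:

    docker info --format '{{.CgroupDriver}}'
    # -> systemd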
	I1008 14:24:08.623386  654880 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1008 14:24:08.638627  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 14:24:08.651897  654880 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:24:08.651966  654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:24:08.670277  654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:24:08.688725  654880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:24:08.771633  654880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:24:08.860938  654880 docker.go:234] disabling docker service ...
	I1008 14:24:08.861030  654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:24:08.880549  654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:24:08.894395  654880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:24:08.979782  654880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:24:09.065757  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:24:09.079136  654880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:24:09.095338  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1008 14:24:09.107275  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 14:24:09.117636  654880 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1008 14:24:09.117701  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1008 14:24:09.127943  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 14:24:09.138714  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 14:24:09.148727  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 14:24:09.158882  654880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:24:09.168295  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 14:24:09.178665  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1008 14:24:09.188393  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1008 14:24:09.198424  654880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:24:09.206454  654880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:24:09.215144  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:24:09.294927  654880 ssh_runner.go:195] Run: sudo systemctl restart containerd
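Taken together, the sed edits above leave the CRI section of /etc/containerd/config.toml looking roughly like this (reconstructed from the commands, not a captured file):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true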
	I1008 14:24:09.407140  654880 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1008 14:24:09.407220  654880 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1008 14:24:09.411681  654880 start.go:563] Will wait 60s for crictl version
	I1008 14:24:09.411754  654880 ssh_runner.go:195] Run: which crictl
	I1008 14:24:09.415949  654880 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:24:09.443331  654880 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1008 14:24:09.443406  654880 ssh_runner.go:195] Run: containerd --version
	I1008 14:24:09.469419  654880 ssh_runner.go:195] Run: containerd --version
	I1008 14:24:09.496238  654880 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1008 14:24:09.497613  654880 out.go:179]   - env NO_PROXY=192.168.67.2
	I1008 14:24:09.498926  654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:24:09.517143  654880 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1008 14:24:09.521732  654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:24:09.533155  654880 mustload.go:65] Loading cluster: multinode-439307
	I1008 14:24:09.533379  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:24:09.533664  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:24:09.552397  654880 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:24:09.552676  654880 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307 for IP: 192.168.67.3
	I1008 14:24:09.552690  654880 certs.go:195] generating shared ca certs ...
	I1008 14:24:09.552707  654880 certs.go:227] acquiring lock for ca certs: {Name:mk57aa9b2383fcc0908491da1ce926c707ff69a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:24:09.552825  654880 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key
	I1008 14:24:09.552870  654880 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key
	I1008 14:24:09.552884  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 14:24:09.552899  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 14:24:09.552911  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 14:24:09.552921  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 14:24:09.553005  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem (1338 bytes)
	W1008 14:24:09.553040  654880 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787_empty.pem, impossibly tiny 0 bytes
	I1008 14:24:09.553048  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:24:09.553076  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:24:09.553109  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:24:09.553130  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem (1675 bytes)
	I1008 14:24:09.553168  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem (1708 bytes)
	I1008 14:24:09.553193  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem -> /usr/share/ca-certificates/516787.pem
	I1008 14:24:09.553207  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /usr/share/ca-certificates/5167872.pem
	I1008 14:24:09.553222  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:09.553242  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:24:09.573504  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 14:24:09.592232  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:24:09.610884  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 14:24:09.630003  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem --> /usr/share/ca-certificates/516787.pem (1338 bytes)
	I1008 14:24:09.653800  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /usr/share/ca-certificates/5167872.pem (1708 bytes)
	I1008 14:24:09.675803  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:24:09.695568  654880 ssh_runner.go:195] Run: openssl version
	I1008 14:24:09.702733  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516787.pem && ln -fs /usr/share/ca-certificates/516787.pem /etc/ssl/certs/516787.pem"
	I1008 14:24:09.712131  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516787.pem
	I1008 14:24:09.716287  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:09 /usr/share/ca-certificates/516787.pem
	I1008 14:24:09.716357  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516787.pem
	I1008 14:24:09.752537  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516787.pem /etc/ssl/certs/51391683.0"
	I1008 14:24:09.762173  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5167872.pem && ln -fs /usr/share/ca-certificates/5167872.pem /etc/ssl/certs/5167872.pem"
	I1008 14:24:09.772303  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5167872.pem
	I1008 14:24:09.776649  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:09 /usr/share/ca-certificates/5167872.pem
	I1008 14:24:09.776712  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5167872.pem
	I1008 14:24:09.812619  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5167872.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:24:09.823098  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:24:09.832190  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:09.836566  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:03 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:09.836631  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:09.871385  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
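The <hash>.0 symlink names follow OpenSSL's c_rehash convention: TLS libraries locate a CA in /etc/ssl/certs by the subject-name hash of the certificate (b5213941 for minikubeCA, per the command above). The link amounts to:

    ln -fs /etc/ssl/certs/minikubeCA.pem \
      "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"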
	I1008 14:24:09.881326  654880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:24:09.885609  654880 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 14:24:09.885678  654880 kubeadm.go:934] updating node {m02 192.168.67.3 8443 v1.34.1 containerd false true} ...
	I1008 14:24:09.885785  654880 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-439307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:24:09.885854  654880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:24:09.894180  654880 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:24:09.894257  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1008 14:24:09.902472  654880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1008 14:24:09.916134  654880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:24:09.931662  654880 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:24:09.935628  654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:24:09.946151  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:24:10.025257  654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:24:10.052607  654880 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:24:10.052868  654880 start.go:317] joinCluster: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:24:10.052965  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1008 14:24:10.053040  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:24:10.072940  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:24:10.226647  654880 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I1008 14:24:10.226740  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4ut921.623axv37vw0z44c2 --discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-439307-m02"
	I1008 14:24:11.499926  654880 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4ut921.623axv37vw0z44c2 --discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-439307-m02": (1.273161843s)
	I1008 14:24:11.500025  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1008 14:24:11.684824  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-439307-m02 minikube.k8s.io/updated_at=2025_10_08T14_24_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=multinode-439307 minikube.k8s.io/primary=false
	I1008 14:24:11.757264  654880 start.go:319] duration metric: took 1.704388689s to joinCluster
	I1008 14:24:11.757362  654880 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I1008 14:24:11.757686  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:24:11.759939  654880 out.go:179] * Verifying Kubernetes components...
	I1008 14:24:11.761383  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:24:11.853236  654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:24:11.868476  654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:24:11.868890  654880 node_ready.go:35] waiting up to 6m0s for node "multinode-439307-m02" to be "Ready" ...
	W1008 14:24:13.872620  654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
	W1008 14:24:16.372273  654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
	W1008 14:24:18.372477  654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
	W1008 14:24:20.372540  654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
	W1008 14:24:22.872160  654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
	I1008 14:24:24.371830  654880 node_ready.go:49] node "multinode-439307-m02" is "Ready"
	I1008 14:24:24.371861  654880 node_ready.go:38] duration metric: took 12.502945701s for node "multinode-439307-m02" to be "Ready" ...
	I1008 14:24:24.371877  654880 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 14:24:24.371923  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:24:24.385754  654880 system_svc.go:56] duration metric: took 13.866509ms WaitForService to wait for kubelet
	I1008 14:24:24.385788  654880 kubeadm.go:586] duration metric: took 12.628395274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:24:24.385819  654880 node_conditions.go:102] verifying NodePressure condition ...
	I1008 14:24:24.388606  654880 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1008 14:24:24.388634  654880 node_conditions.go:123] node cpu capacity is 8
	I1008 14:24:24.388647  654880 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1008 14:24:24.388663  654880 node_conditions.go:123] node cpu capacity is 8
	I1008 14:24:24.388668  654880 node_conditions.go:105] duration metric: took 2.843574ms to run NodePressure ...
	I1008 14:24:24.388679  654880 start.go:241] waiting for startup goroutines ...
	I1008 14:24:24.388715  654880 start.go:255] writing updated cluster config ...
	I1008 14:24:24.389017  654880 ssh_runner.go:195] Run: rm -f paused
	I1008 14:24:24.393052  654880 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 14:24:24.393669  654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:24:24.396852  654880 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-llvkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.401377  654880 pod_ready.go:94] pod "coredns-66bc5c9577-llvkc" is "Ready"
	I1008 14:24:24.401408  654880 pod_ready.go:86] duration metric: took 4.533488ms for pod "coredns-66bc5c9577-llvkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.403808  654880 pod_ready.go:83] waiting for pod "etcd-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.407791  654880 pod_ready.go:94] pod "etcd-multinode-439307" is "Ready"
	I1008 14:24:24.407814  654880 pod_ready.go:86] duration metric: took 3.984727ms for pod "etcd-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.410014  654880 pod_ready.go:83] waiting for pod "kube-apiserver-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.414225  654880 pod_ready.go:94] pod "kube-apiserver-multinode-439307" is "Ready"
	I1008 14:24:24.414249  654880 pod_ready.go:86] duration metric: took 4.210762ms for pod "kube-apiserver-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.416187  654880 pod_ready.go:83] waiting for pod "kube-controller-manager-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.594705  654880 request.go:683] "Waited before sending request" delay="178.359169ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-439307"
	I1008 14:24:24.795096  654880 request.go:683] "Waited before sending request" delay="197.360136ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307"
	I1008 14:24:24.797827  654880 pod_ready.go:94] pod "kube-controller-manager-multinode-439307" is "Ready"
	I1008 14:24:24.797865  654880 pod_ready.go:86] duration metric: took 381.656304ms for pod "kube-controller-manager-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.994392  654880 request.go:683] "Waited before sending request" delay="196.347363ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1008 14:24:24.998079  654880 pod_ready.go:83] waiting for pod "kube-proxy-djg8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:25.194583  654880 request.go:683] "Waited before sending request" delay="196.367193ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djg8q"
	I1008 14:24:25.395013  654880 request.go:683] "Waited before sending request" delay="197.398426ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307-m02"
	I1008 14:24:25.397572  654880 pod_ready.go:94] pod "kube-proxy-djg8q" is "Ready"
	I1008 14:24:25.397604  654880 pod_ready.go:86] duration metric: took 399.496213ms for pod "kube-proxy-djg8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:25.397618  654880 pod_ready.go:83] waiting for pod "kube-proxy-sjzfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:25.595137  654880 request.go:683] "Waited before sending request" delay="197.409064ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjzfx"
	I1008 14:24:25.794319  654880 request.go:683] "Waited before sending request" delay="196.312301ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307"
	I1008 14:24:25.797345  654880 pod_ready.go:94] pod "kube-proxy-sjzfx" is "Ready"
	I1008 14:24:25.797374  654880 pod_ready.go:86] duration metric: took 399.749677ms for pod "kube-proxy-sjzfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:25.994958  654880 request.go:683] "Waited before sending request" delay="197.435121ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1008 14:24:25.997593  654880 pod_ready.go:83] waiting for pod "kube-scheduler-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:26.195068  654880 request.go:683] "Waited before sending request" delay="197.36444ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-439307"
	I1008 14:24:26.395200  654880 request.go:683] "Waited before sending request" delay="197.229852ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307"
	I1008 14:24:26.397809  654880 pod_ready.go:94] pod "kube-scheduler-multinode-439307" is "Ready"
	I1008 14:24:26.397834  654880 pod_ready.go:86] duration metric: took 400.216835ms for pod "kube-scheduler-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:26.397846  654880 pod_ready.go:40] duration metric: took 2.004759901s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 14:24:26.444090  654880 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1008 14:24:26.446553  654880 out.go:179] * Done! kubectl is now configured to use "multinode-439307" cluster and "default" namespace by default
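
Editor's note: the node_ready.go lines above show the standard pattern for this wait — poll the API server every couple of seconds until the new node's Ready condition turns True, giving up after the 6m0s budget. A minimal client-go sketch of that pattern, not minikube's actual implementation (node name and intervals are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports Ready=True,
// mirroring the retry loop logged by node_ready.go above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-439307-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}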
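The "Waited before sending request ... client-side throttling" lines during the pod-ready sweep come from client-go's client-side rate limiter: the rest.Config dumps above show QPS:0 and Burst:0, which makes client-go fall back to its conservative defaults (5 requests/s, burst of 10), so a burst of GETs gets spaced out by roughly 200ms each. A hedged sketch of raising those limits on a rest.Config (the 50/100 values are arbitrary examples):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Zero QPS/Burst (as in the dumps above) means client-go applies its
	// defaults; setting them explicitly avoids the "Waited before sending
	// request" delays for bursty read patterns like the pod-ready checks.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}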
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	3c70355249fcd       8c811b4aec35f       13 seconds ago       Running             busybox                   0                   f6bf249387eaa       busybox-7b57f96db7-n6rvn                   default
	4ea1a37f26c9f       52546a367cc9e       44 seconds ago       Running             coredns                   0                   d809f9cba67fd       coredns-66bc5c9577-llvkc                   kube-system
	1ab8655881512       6e38f40d628db       44 seconds ago       Running             storage-provisioner       0                   e2da4323cdf8d       storage-provisioner                        kube-system
	eb44427aa7b68       409467f978b4a       55 seconds ago       Running             kindnet-cni               0                   470cdd7a7920c       kindnet-l6pqj                              kube-system
	70d5305f9c0f1       fc25172553d79       55 seconds ago       Running             kube-proxy                0                   734361aeebab7       kube-proxy-sjzfx                           kube-system
	c5ef7b607ae59       5f1f5298c888d       About a minute ago   Running             etcd                      0                   627ec39143d66       etcd-multinode-439307                      kube-system
	7bc5378271f6e       c80c8dbafe7dd       About a minute ago   Running             kube-controller-manager   0                   887929a790edf       kube-controller-manager-multinode-439307   kube-system
	4023d943508d7       7dd6aaa1717ab       About a minute ago   Running             kube-scheduler            0                   9fb7378888c6d       kube-scheduler-multinode-439307            kube-system
	a75297140a138       c3994bc696102       About a minute ago   Running             kube-apiserver            0                   db9ed929a6258       kube-apiserver-multinode-439307            kube-system
	
	
	==> containerd <==
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.109269861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-llvkc,Uid:a445b5ef-8d30-4b7c-a40f-77f2a9072e7f,Namespace:kube-system,Attempt:0,}"
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.111413366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e1d410c3-de2a-4e2a-88c1-93970ce8b254,Namespace:kube-system,Attempt:0,}"
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.205307729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e1d410c3-de2a-4e2a-88c1-93970ce8b254,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2da4323cdf8d7d7b3931ff8c336a482dc0cc57329950586094267711d1b74ae\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.210861374Z" level=info msg="CreateContainer within sandbox \"e2da4323cdf8d7d7b3931ff8c336a482dc0cc57329950586094267711d1b74ae\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.212899403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-llvkc,Uid:a445b5ef-8d30-4b7c-a40f-77f2a9072e7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d809f9cba67fd85761b8285b149afbb37772e4b710c2445e0b7d5cf977684afa\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.217454502Z" level=info msg="CreateContainer within sandbox \"d809f9cba67fd85761b8285b149afbb37772e4b710c2445e0b7d5cf977684afa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.224178373Z" level=info msg="CreateContainer within sandbox \"e2da4323cdf8d7d7b3931ff8c336a482dc0cc57329950586094267711d1b74ae\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"1ab8655881512f6c4b619c636eee1de03f57f734cce6fdc4604bae23d671ab17\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.224770434Z" level=info msg="StartContainer for \"1ab8655881512f6c4b619c636eee1de03f57f734cce6fdc4604bae23d671ab17\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.229804859Z" level=info msg="CreateContainer within sandbox \"d809f9cba67fd85761b8285b149afbb37772e4b710c2445e0b7d5cf977684afa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.230405753Z" level=info msg="StartContainer for \"4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.288088455Z" level=info msg="StartContainer for \"1ab8655881512f6c4b619c636eee1de03f57f734cce6fdc4604bae23d671ab17\" returns successfully"
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.302363146Z" level=info msg="StartContainer for \"4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116\" returns successfully"
	Oct 08 14:24:27 multinode-439307 containerd[665]: time="2025-10-08T14:24:27.431929318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-n6rvn,Uid:48d40e87-f7eb-4886-84ea-0d1c344bcef4,Namespace:default,Attempt:0,}"
	Oct 08 14:24:27 multinode-439307 containerd[665]: time="2025-10-08T14:24:27.524294263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-n6rvn,Uid:48d40e87-f7eb-4886-84ea-0d1c344bcef4,Namespace:default,Attempt:0,} returns sandbox id \"f6bf249387eaaf48dfa1cfac0cb2eb3646b9e2075be5c9397d97b91ceb9f7c69\""
	Oct 08 14:24:27 multinode-439307 containerd[665]: time="2025-10-08T14:24:27.526837991Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.786377714Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.787080125Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.788316089Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.790697278Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.791452472Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.264570757s"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.791498077Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.798301525Z" level=info msg="CreateContainer within sandbox \"f6bf249387eaaf48dfa1cfac0cb2eb3646b9e2075be5c9397d97b91ceb9f7c69\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.808156445Z" level=info msg="CreateContainer within sandbox \"f6bf249387eaaf48dfa1cfac0cb2eb3646b9e2075be5c9397d97b91ceb9f7c69\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"3c70355249fcd2e6ee6d118b75c6bc3546058b18b6aeb6dce0b1b702d096ac47\""
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.809029634Z" level=info msg="StartContainer for \"3c70355249fcd2e6ee6d118b75c6bc3546058b18b6aeb6dce0b1b702d096ac47\""
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.869302769Z" level=info msg="StartContainer for \"3c70355249fcd2e6ee6d118b75c6bc3546058b18b6aeb6dce0b1b702d096ac47\" returns successfully"
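
The containerd excerpt above is the CRI lifecycle for the busybox pod: RunPodSandbox, PullImage, CreateContainer, StartContainer. For reference, a small sketch that pulls the same image through the containerd Go client (classic github.com/containerd/containerd v1 API, assumed available; CRI-managed images live in the "k8s.io" namespace):

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// CRI pods and images are stored under containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	// Same image the kubelet pulled for the busybox pod in the log above.
	img, err := client.Pull(ctx, "gcr.io/k8s-minikube/busybox:1.28", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", img.Name())
}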
	
	
	==> coredns [4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116] <==
	[INFO] 10.244.1.2:46440 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125237s
	[INFO] 10.244.0.3:32802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173381s
	[INFO] 10.244.0.3:52099 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000116887s
	[INFO] 10.244.0.3:55009 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158031s
	[INFO] 10.244.0.3:52826 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015101s
	[INFO] 10.244.0.3:36042 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00007137s
	[INFO] 10.244.0.3:51029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001339s
	[INFO] 10.244.0.3:58795 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130735s
	[INFO] 10.244.0.3:47967 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075412s
	[INFO] 10.244.1.2:39882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00025259s
	[INFO] 10.244.1.2:52814 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000218308s
	[INFO] 10.244.1.2:37521 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148655s
	[INFO] 10.244.1.2:42486 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011547s
	[INFO] 10.244.0.3:44143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169188s
	[INFO] 10.244.0.3:48380 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000235742s
	[INFO] 10.244.0.3:43850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155536s
	[INFO] 10.244.0.3:49494 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093677s
	[INFO] 10.244.1.2:59241 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198155s
	[INFO] 10.244.1.2:55245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162203s
	[INFO] 10.244.1.2:33545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110828s
	[INFO] 10.244.1.2:36918 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139385s
	[INFO] 10.244.0.3:59030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160386s
	[INFO] 10.244.0.3:44681 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139946s
	[INFO] 10.244.0.3:37620 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098444s
	[INFO] 10.244.0.3:59659 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066524s
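
The query sequence above is the pod resolver walking its search path: with the cluster default ndots:5, a short name such as kubernetes.default is first expanded against each search domain, which is why kubernetes.default.default.svc.cluster.local gets NXDOMAIN before kubernetes.default.svc.cluster.local answers NOERROR. A minimal Go sketch of the lookup a pod in this cluster would perform (must run in-cluster; the name comes from the log above):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// The fully-qualified form skips the search-path expansion seen above.
	ips, err := net.DefaultResolver.LookupIPAddr(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println(ip.IP) // expect the service ClusterIP, e.g. 10.96.0.1
	}
}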
	
	
	==> describe nodes <==
	Name:               multinode-439307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-439307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=multinode-439307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T14_23_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 14:23:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-439307
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 14:24:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 14:23:57 +0000   Wed, 08 Oct 2025 14:23:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 14:23:57 +0000   Wed, 08 Oct 2025 14:23:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 14:23:57 +0000   Wed, 08 Oct 2025 14:23:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 14:23:57 +0000   Wed, 08 Oct 2025 14:23:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-439307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 56d3e6862fcc45b48f25bde7f561b1d7
	  System UUID:                3ecc1d83-e69e-4927-aebb-a9dcae9475e4
	  Boot ID:                    5fdbec2a-e754-47ce-9745-1553567d6c31
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-n6rvn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-66bc5c9577-llvkc                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     56s
	  kube-system                 etcd-multinode-439307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         61s
	  kube-system                 kindnet-l6pqj                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-multinode-439307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-multinode-439307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-sjzfx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-multinode-439307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 66s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  66s (x8 over 66s)  kubelet          Node multinode-439307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s (x8 over 66s)  kubelet          Node multinode-439307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s (x7 over 66s)  kubelet          Node multinode-439307 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  61s                kubelet          Node multinode-439307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s                kubelet          Node multinode-439307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s                kubelet          Node multinode-439307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node multinode-439307 event: Registered Node multinode-439307 in Controller
	  Normal  NodeReady                45s                kubelet          Node multinode-439307 status is now: NodeReady
	
	
	Name:               multinode-439307-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-439307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=multinode-439307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_08T14_24_11_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 14:24:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-439307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 14:24:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.3
	  Hostname:    multinode-439307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b74ec156e614a3fac7c415130ea0397
	  System UUID:                ab0bc412-83f7-4153-b57d-32510d60dd56
	  Boot ID:                    5fdbec2a-e754-47ce-9745-1553567d6c31
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-9qspn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 kindnet-wch5j               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-proxy-djg8q            0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  31s (x3 over 31s)  kubelet          Node multinode-439307-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s (x3 over 31s)  kubelet          Node multinode-439307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s (x3 over 31s)  kubelet          Node multinode-439307-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  31s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s                node-controller  Node multinode-439307-m02 event: Registered Node multinode-439307-m02 in Controller
	  Normal  NodeReady                18s                kubelet          Node multinode-439307-m02 status is now: NodeReady
	
	
	Name:               multinode-439307-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-439307-m03
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 14:24:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-439307-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletNotReady              [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]
	Addresses:
	  InternalIP:  192.168.67.4
	  Hostname:    multinode-439307-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb5019628fa5415a9a6de65b61b0aa10
	  System UUID:                4c1a693e-f511-45ca-9c03-2a547007f3cb
	  Boot ID:                    5fdbec2a-e754-47ce-9745-1553567d6c31
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-58vm5       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      1s
	  kube-system                 kube-proxy-fs89g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  NodeHasSufficientMemory  1s (x3 over 1s)  kubelet  Node multinode-439307-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    1s (x3 over 1s)  kubelet  Node multinode-439307-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     1s (x3 over 1s)  kubelet  Node multinode-439307-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  1s               kubelet  Updated Node Allocatable limit across pods
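
The freshly added m03 node above still carries the node.kubernetes.io/not-ready:NoSchedule taint and reports KubeletNotReady because its CNI plugin has not initialized yet. A hedged client-go sketch for detecting that state programmatically (node name taken from the log; a sketch, not the test's actual check):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-439307-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, t := range node.Spec.Taints {
		// e.g. node.kubernetes.io/not-ready:NoSchedule while the CNI is down
		fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Message carries the "cni plugin not initialized" detail above
			fmt.Printf("Ready=%s: %s\n", c.Status, c.Message)
		}
	}
}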
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 26 b3 37 bf 19 08 06
	[  +0.000410] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 1c 28 4b 91 c9 08 06
	[Oct 8 13:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 16 3f fe bd b6 08 06
	[  +0.044604] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea 40 7d d0 6d a6 08 06
	[ +10.339808] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f2 86 26 6c 97 dc 08 06
	[  +2.975774] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2a 61 e9 d6 10 e3 08 06
	[  +0.101555] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea fa 29 51 08 ac 08 06
	[ +30.965246] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 37 46 57 22 c1 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 40 7d d0 6d a6 08 06
	[Oct 8 14:00] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 9c 9c 72 fb 11 08 06
	[  +0.000628] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea fa 29 51 08 ac 08 06
	[  +2.730130] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 a4 4e 39 b9 db 08 06
	[  +0.000456] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 86 26 6c 97 dc 08 06
	
	
	==> etcd [c5ef7b607ae59f8f6aeebf4ab11b5560d14e184780133f6a6973d2dc59d69c2c] <==
	{"level":"warn","ts":"2025-10-08T14:23:38.147870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.154263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.163022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.169346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.175903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.182506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.188820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.195857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.202025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.208239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.221089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.228181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.235952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.249350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.257305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.263748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.269837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.276382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.282566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.288832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.302605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.308859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.315043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.361302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:24:35.554339Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.090033ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289968003519192253 > lease_revoke:<id:1fc799c434c59c06>","response":"size:29"}
	
	
	==> kernel <==
	 14:24:42 up  2:07,  0 user,  load average: 1.10, 1.50, 1.84
	Linux multinode-439307 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eb44427aa7b68d0cb5246a5d10b69e69a310ad7dbe803f32fbfe929362b00e9b] <==
	time="2025-10-08T14:23:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 14:23:47.690096       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 14:23:47.690125       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 14:23:47.690138       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 14:23:47.690279       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1008 14:23:48.090594       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 14:23:48.090626       1 metrics.go:72] Registering metrics
	I1008 14:23:48.090682       1 controller.go:711] "Syncing nftables rules"
	I1008 14:23:57.691180       1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
	I1008 14:23:57.691253       1 main.go:301] handling current node
	I1008 14:24:07.697046       1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
	I1008 14:24:07.697089       1 main.go:301] handling current node
	I1008 14:24:17.690952       1 main.go:297] Handling node with IPs: map[192.168.67.3:{}]
	I1008 14:24:17.691012       1 main.go:324] Node multinode-439307-m02 has CIDR [10.244.1.0/24] 
	I1008 14:24:17.691311       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.67.3 Flags: [] Table: 0 Realm: 0} 
	I1008 14:24:17.691488       1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
	I1008 14:24:17.691506       1 main.go:301] handling current node
	I1008 14:24:27.690151       1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
	I1008 14:24:27.690209       1 main.go:301] handling current node
	I1008 14:24:27.690224       1 main.go:297] Handling node with IPs: map[192.168.67.3:{}]
	I1008 14:24:27.690228       1 main.go:324] Node multinode-439307-m02 has CIDR [10.244.1.0/24] 
	I1008 14:24:37.696064       1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
	I1008 14:24:37.696102       1 main.go:301] handling current node
	I1008 14:24:37.696118       1 main.go:297] Handling node with IPs: map[192.168.67.3:{}]
	I1008 14:24:37.696123       1 main.go:324] Node multinode-439307-m02 has CIDR [10.244.1.0/24] 
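
The "Adding route" line above shows kindnet installing a host route that sends the remote node's pod CIDR (10.244.1.0/24) via that node's InternalIP (192.168.67.3) — this is what makes cross-node pod traffic work without an overlay. A minimal sketch of the same operation with the github.com/vishvananda/netlink package (assumed dependency; requires root; addresses taken from the log):

package main

import (
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Pod CIDR of multinode-439307-m02 and that node's IP, from the log above.
	_, dst, err := net.ParseCIDR("10.244.1.0/24")
	if err != nil {
		panic(err)
	}
	route := &netlink.Route{
		Dst: dst,
		Gw:  net.ParseIP("192.168.67.3"),
	}
	// Equivalent to: ip route add 10.244.1.0/24 via 192.168.67.3
	if err := netlink.RouteAdd(route); err != nil {
		panic(err)
	}
}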
	
	
	==> kube-apiserver [a75297140a13849f0bbb8691fcb7ec90b635a193300494f88d6ee8bb6961ae9a] <==
	I1008 14:23:39.722440       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 14:23:40.200145       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 14:23:40.237907       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 14:23:40.325514       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1008 14:23:40.331916       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I1008 14:23:40.333100       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 14:23:40.337576       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 14:23:40.736405       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 14:23:41.412400       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 14:23:41.423756       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1008 14:23:41.431519       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1008 14:23:46.190246       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 14:23:46.194096       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 14:23:46.390973       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1008 14:23:46.839639       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1008 14:24:29.836739       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60732: use of closed network connection
	E1008 14:24:30.001569       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60754: use of closed network connection
	E1008 14:24:30.207099       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60770: use of closed network connection
	E1008 14:24:30.374520       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60784: use of closed network connection
	E1008 14:24:30.535911       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60810: use of closed network connection
	E1008 14:24:30.697115       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60834: use of closed network connection
	E1008 14:24:30.974454       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60862: use of closed network connection
	E1008 14:24:31.133503       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60876: use of closed network connection
	E1008 14:24:31.290639       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60900: use of closed network connection
	E1008 14:24:31.447827       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60924: use of closed network connection
	
	
	==> kube-controller-manager [7bc5378271f6ec3084def02b6c09453b95f33b6c40f004a8ecd7ddaca4ee2e23] <==
	I1008 14:23:45.735304       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1008 14:23:45.736140       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1008 14:23:45.736180       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 14:23:45.736201       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1008 14:23:45.736257       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 14:23:45.736247       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 14:23:45.736431       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1008 14:23:45.736317       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1008 14:23:45.736499       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 14:23:45.736731       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1008 14:23:45.740043       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 14:23:45.740066       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1008 14:23:45.742483       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 14:23:45.745803       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 14:23:45.752135       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 14:23:45.757502       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1008 14:23:45.762890       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 14:24:00.736958       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1008 14:24:11.255158       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-439307-m02\" does not exist"
	I1008 14:24:11.267044       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-439307-m02" podCIDRs=["10.244.1.0/24"]
	I1008 14:24:15.739150       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-439307-m02"
	I1008 14:24:24.240565       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-439307-m02"
	I1008 14:24:41.773017       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-439307-m02"
	I1008 14:24:41.773446       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-439307-m03\" does not exist"
	I1008 14:24:41.785959       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-439307-m03" podCIDRs=["10.244.2.0/24"]
	
	
	==> kube-proxy [70d5305f9c0f1e614d86457efd99bfbb2a639a470f299474edd5bdee53d17425] <==
	I1008 14:23:46.942146       1 server_linux.go:53] "Using iptables proxy"
	I1008 14:23:47.054382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 14:23:47.154807       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 14:23:47.154859       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.67.2"]
	E1008 14:23:47.154951       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 14:23:47.180008       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 14:23:47.180073       1 server_linux.go:132] "Using iptables Proxier"
	I1008 14:23:47.186411       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 14:23:47.187151       1 server.go:527] "Version info" version="v1.34.1"
	I1008 14:23:47.187189       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 14:23:47.189577       1 config.go:200] "Starting service config controller"
	I1008 14:23:47.189598       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 14:23:47.189627       1 config.go:106] "Starting endpoint slice config controller"
	I1008 14:23:47.189632       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 14:23:47.189645       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 14:23:47.189650       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 14:23:47.189881       1 config.go:309] "Starting node config controller"
	I1008 14:23:47.189888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 14:23:47.189894       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 14:23:47.290449       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 14:23:47.290467       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 14:23:47.290469       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
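
The paired "Waiting for caches to sync" / "Caches are synced" lines in the kube-proxy log are the standard client-go shared-informer handshake: each config controller blocks until its informer's initial List has populated the local cache before it starts programming iptables. A minimal sketch of that pattern, assuming a reachable kubeconfig at the default path (illustrative, not kube-proxy's own code):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // reflectors begin their initial List+Watch

	// Blocks until the cache is populated, i.e. the moment the
	// "Caches are synced" line above is logged.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		panic("timed out waiting for the node informer cache")
	}
	fmt.Println("node informer cache synced")
}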
	
	
	==> kube-scheduler [4023d943508d78a5c887a79feaa82148d136b6c293acc44418506ac640d4c238] <==
	E1008 14:23:38.760223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1008 14:23:38.760331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 14:23:38.760371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 14:23:38.760376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 14:23:38.760448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 14:23:38.760458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 14:23:38.760452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 14:23:38.760534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 14:23:38.760564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 14:23:38.760602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 14:23:38.760684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 14:23:38.760689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 14:23:38.760727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1008 14:23:38.760774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 14:23:38.760787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 14:23:39.582212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 14:23:39.649725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 14:23:39.743595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 14:23:39.755043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1008 14:23:39.833591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 14:23:39.896154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1008 14:23:39.927234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 14:23:39.948345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1008 14:23:39.962957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1008 14:23:41.659308       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
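
The burst of "Failed to watch ... is forbidden" errors above is the usual scheduler startup race: its informers begin listing before the apiserver has finished bootstrapping RBAC for system:kube-scheduler, so the first attempts come back 403 and the reflectors retry until the final "Caches are synced" line. A hedged sketch of how client-go code distinguishes that transient denial from a hard failure (errors.IsForbidden is the real apimachinery helper; everything else here is illustrative):

package rbacprobe

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ListNodesOnce returns nil on an RBAC denial so the caller can retry,
// mirroring how a reflector treats the 403s logged above.
func ListNodesOnce(ctx context.Context, client kubernetes.Interface) error {
	_, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if apierrors.IsForbidden(err) {
		return nil // "cannot list resource ..." => permissions not granted yet
	}
	return err
}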
	
	
	==> kubelet <==
	Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.315174    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-439307" podStartSLOduration=1.3151361129999999 podStartE2EDuration="1.315136113s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.304935753 +0000 UTC m=+1.126741170" watchObservedRunningTime="2025-10-08 14:23:42.315136113 +0000 UTC m=+1.136941525"
	Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.326049    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-439307" podStartSLOduration=1.3260291149999999 podStartE2EDuration="1.326029115s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.315323385 +0000 UTC m=+1.137128872" watchObservedRunningTime="2025-10-08 14:23:42.326029115 +0000 UTC m=+1.147834531"
	Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.326174    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-multinode-439307" podStartSLOduration=1.326165456 podStartE2EDuration="1.326165456s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.325917199 +0000 UTC m=+1.147722617" watchObservedRunningTime="2025-10-08 14:23:42.326165456 +0000 UTC m=+1.147970871"
	Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.352482    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-multinode-439307" podStartSLOduration=1.352459294 podStartE2EDuration="1.352459294s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.33893649 +0000 UTC m=+1.160741907" watchObservedRunningTime="2025-10-08 14:23:42.352459294 +0000 UTC m=+1.174264711"
	Oct 08 14:23:45 multinode-439307 kubelet[1486]: I1008 14:23:45.703138    1486 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 08 14:23:45 multinode-439307 kubelet[1486]: I1008 14:23:45.703898    1486 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481639    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fea0f284-17d4-438c-91a6-14831ce6ce5c-lib-modules\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481688    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jstr\" (UniqueName: \"kubernetes.io/projected/1211872c-1472-435c-a117-2656ba2fca8e-kube-api-access-6jstr\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481713    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fea0f284-17d4-438c-91a6-14831ce6ce5c-cni-cfg\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481727    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1211872c-1472-435c-a117-2656ba2fca8e-xtables-lock\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481745    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fea0f284-17d4-438c-91a6-14831ce6ce5c-xtables-lock\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481763    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1211872c-1472-435c-a117-2656ba2fca8e-lib-modules\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481786    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb5rk\" (UniqueName: \"kubernetes.io/projected/fea0f284-17d4-438c-91a6-14831ce6ce5c-kube-api-access-nb5rk\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481806    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1211872c-1472-435c-a117-2656ba2fca8e-kube-proxy\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
	Oct 08 14:23:47 multinode-439307 kubelet[1486]: I1008 14:23:47.299755    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sjzfx" podStartSLOduration=1.299713567 podStartE2EDuration="1.299713567s" podCreationTimestamp="2025-10-08 14:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:47.299676523 +0000 UTC m=+6.121481941" watchObservedRunningTime="2025-10-08 14:23:47.299713567 +0000 UTC m=+6.121518985"
	Oct 08 14:23:48 multinode-439307 kubelet[1486]: I1008 14:23:48.313744    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-l6pqj" podStartSLOduration=2.313719899 podStartE2EDuration="2.313719899s" podCreationTimestamp="2025-10-08 14:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:48.313549742 +0000 UTC m=+7.135355171" watchObservedRunningTime="2025-10-08 14:23:48.313719899 +0000 UTC m=+7.135525315"
	Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.772604    1486 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853219    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw6pb\" (UniqueName: \"kubernetes.io/projected/a445b5ef-8d30-4b7c-a40f-77f2a9072e7f-kube-api-access-rw6pb\") pod \"coredns-66bc5c9577-llvkc\" (UID: \"a445b5ef-8d30-4b7c-a40f-77f2a9072e7f\") " pod="kube-system/coredns-66bc5c9577-llvkc"
	Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853273    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e1d410c3-de2a-4e2a-88c1-93970ce8b254-tmp\") pod \"storage-provisioner\" (UID: \"e1d410c3-de2a-4e2a-88c1-93970ce8b254\") " pod="kube-system/storage-provisioner"
	Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853308    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlb24\" (UniqueName: \"kubernetes.io/projected/e1d410c3-de2a-4e2a-88c1-93970ce8b254-kube-api-access-nlb24\") pod \"storage-provisioner\" (UID: \"e1d410c3-de2a-4e2a-88c1-93970ce8b254\") " pod="kube-system/storage-provisioner"
	Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853418    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a445b5ef-8d30-4b7c-a40f-77f2a9072e7f-config-volume\") pod \"coredns-66bc5c9577-llvkc\" (UID: \"a445b5ef-8d30-4b7c-a40f-77f2a9072e7f\") " pod="kube-system/coredns-66bc5c9577-llvkc"
	Oct 08 14:23:58 multinode-439307 kubelet[1486]: I1008 14:23:58.331131    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.331114913 podStartE2EDuration="11.331114913s" podCreationTimestamp="2025-10-08 14:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:58.330824691 +0000 UTC m=+17.152630110" watchObservedRunningTime="2025-10-08 14:23:58.331114913 +0000 UTC m=+17.152920351"
	Oct 08 14:23:58 multinode-439307 kubelet[1486]: I1008 14:23:58.344349    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-llvkc" podStartSLOduration=12.344324469 podStartE2EDuration="12.344324469s" podCreationTimestamp="2025-10-08 14:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:58.34404488 +0000 UTC m=+17.165850298" watchObservedRunningTime="2025-10-08 14:23:58.344324469 +0000 UTC m=+17.166129896"
	Oct 08 14:24:27 multinode-439307 kubelet[1486]: I1008 14:24:27.247091    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g9nr\" (UniqueName: \"kubernetes.io/projected/48d40e87-f7eb-4886-84ea-0d1c344bcef4-kube-api-access-9g9nr\") pod \"busybox-7b57f96db7-n6rvn\" (UID: \"48d40e87-f7eb-4886-84ea-0d1c344bcef4\") " pod="default/busybox-7b57f96db7-n6rvn"
	Oct 08 14:24:29 multinode-439307 kubelet[1486]: I1008 14:24:29.399108    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-n6rvn" podStartSLOduration=1.132795141 podStartE2EDuration="2.399085602s" podCreationTimestamp="2025-10-08 14:24:27 +0000 UTC" firstStartedPulling="2025-10-08 14:24:27.526216854 +0000 UTC m=+46.348022263" lastFinishedPulling="2025-10-08 14:24:28.792507312 +0000 UTC m=+47.614312724" observedRunningTime="2025-10-08 14:24:29.398743884 +0000 UTC m=+48.220549303" watchObservedRunningTime="2025-10-08 14:24:29.399085602 +0000 UTC m=+48.220891019"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-439307 -n multinode-439307
helpers_test.go:269: (dbg) Run:  kubectl --context multinode-439307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: kindnet-58vm5 kube-proxy-fs89g
helpers_test.go:282: ======> post-mortem[TestMultiNode/serial/AddNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context multinode-439307 describe pod kindnet-58vm5 kube-proxy-fs89g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context multinode-439307 describe pod kindnet-58vm5 kube-proxy-fs89g: exit status 1 (62.0269ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kindnet-58vm5" not found
	Error from server (NotFound): pods "kube-proxy-fs89g" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context multinode-439307 describe pod kindnet-58vm5 kube-proxy-fs89g: exit status 1
--- FAIL: TestMultiNode/serial/AddNode (12.21s)
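
For reference, the "(dbg) Run:" lines show the test driving the real binary as a subprocess and asserting on its exit code; the failure above surfaced as exit status 80 from `minikube node add`. A minimal sketch of that pattern with os/exec (illustrative, not the helpers_test.go implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"node", "add", "-p", "multinode-439307", "-v=5", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// The failure above reached the test as a non-zero code like this.
		fmt.Printf("non-zero exit: %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run binary:", err)
		return
	}
	fmt.Printf("%s", out)
}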

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (1.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-439307 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:239: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	[{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-439307","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9090b00fbf2832bf29026571965024d88b63d555","minikube.k8s.io/name":"multinode-439307","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_10_08T14_23_42_0700","minikube.k8s.io/version":"v1.37.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-439307-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9090b00fbf2832bf29026571965024d88b63d555","minikube.k8s.io/name":"multinode-439307","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_10_08T14_24_11_0700","minikube.k8s.io/version":"v1.37.0"},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","ku
bernetes.io/hostname":"multinode-439307-m03","kubernetes.io/os":"linux"},]

                                                
                                                
-- /stdout --
multinode_test.go:239: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	[{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-439307","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9090b00fbf2832bf29026571965024d88b63d555","minikube.k8s.io/name":"multinode-439307","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_10_08T14_23_42_0700","minikube.k8s.io/version":"v1.37.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-439307-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9090b00fbf2832bf29026571965024d88b63d555","minikube.k8s.io/name":"multinode-439307","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_10_08T14_24_11_0700","minikube.k8s.io/version":"v1.37.0"},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","ku
bernetes.io/hostname":"multinode-439307-m03","kubernetes.io/os":"linux"},]

                                                
                                                
-- /stdout --
multinode_test.go:239: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	[{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-439307","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9090b00fbf2832bf29026571965024d88b63d555","minikube.k8s.io/name":"multinode-439307","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_10_08T14_23_42_0700","minikube.k8s.io/version":"v1.37.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-439307-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9090b00fbf2832bf29026571965024d88b63d555","minikube.k8s.io/name":"multinode-439307","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_10_08T14_24_11_0700","minikube.k8s.io/version":"v1.37.0"},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","ku
bernetes.io/hostname":"multinode-439307-m03","kubernetes.io/os":"linux"},]

                                                
                                                
-- /stdout --
multinode_test.go:239: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	[{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-439307","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9090b00fbf2832bf29026571965024d88b63d555","minikube.k8s.io/name":"multinode-439307","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_10_08T14_23_42_0700","minikube.k8s.io/version":"v1.37.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-439307-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9090b00fbf2832bf29026571965024d88b63d555","minikube.k8s.io/name":"multinode-439307","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_10_08T14_24_11_0700","minikube.k8s.io/version":"v1.37.0"},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","ku
bernetes.io/hostname":"multinode-439307-m03","kubernetes.io/os":"linux"},]

                                                
                                                
-- /stdout --
multinode_test.go:239: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	[{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-439307","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9090b00fbf2832bf29026571965024d88b63d555","minikube.k8s.io/name":"multinode-439307","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_10_08T14_23_42_0700","minikube.k8s.io/version":"v1.37.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-439307-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9090b00fbf2832bf29026571965024d88b63d555","minikube.k8s.io/name":"multinode-439307","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_10_08T14_24_11_0700","minikube.k8s.io/version":"v1.37.0"},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","ku
bernetes.io/hostname":"multinode-439307-m03","kubernetes.io/os":"linux"},]

                                                
                                                
-- /stdout --
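
The five dumps above are one assertion per expected minikube.k8s.io/* label. In the JSON, the control plane and m02 carry all of them, while multinode-439307-m03, whose provisioning aborted in the AddNode failure above, has only the stock kubernetes.io labels. A sketch of an equivalent check via the API, assuming a kubeconfig at the default path (the real test instead shells out to kubectl with the jsonpath query shown at multinode_test.go:221):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// The label keys the test asserts on every node.
var wanted = []string{
	"minikube.k8s.io/commit", "minikube.k8s.io/version",
	"minikube.k8s.io/updated_at", "minikube.k8s.io/name",
	"minikube.k8s.io/primary",
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, key := range wanted {
			if _, ok := n.Labels[key]; !ok {
				// multinode-439307-m03 trips this branch in the run above.
				fmt.Printf("node %s missing label %s\n", n.Name, key)
			}
		}
	}
}
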
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect multinode-439307
helpers_test.go:243: (dbg) docker inspect multinode-439307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba",
	        "Created": "2025-10-08T14:23:23.101908381Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 655454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:23:23.137079331Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/hosts",
	        "LogPath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba-json.log",
	        "Name": "/multinode-439307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-439307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-439307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba",
	                "LowerDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301-init/diff:/var/lib/docker/overlay2/97746716e496f19c0b3fdecffe1f175c04923b8f3f05ea2a8a25747dfddb9999/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-439307",
	                "Source": "/var/lib/docker/volumes/multinode-439307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-439307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-439307",
	                "name.minikube.sigs.k8s.io": "multinode-439307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd4a1327be75cbe250d2a23b2c88f13f060fa136f90eabee1eecd426d6567242",
	            "SandboxKey": "/var/run/docker/netns/dd4a1327be75",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33306"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33307"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33310"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33308"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33309"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-439307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:be:98:9b:84:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e4823570a3f40e014e3b0688e11409f133ed3676e15bbaea99f537a7b7c50d6",
	                    "EndpointID": "60d8d5339fd7e699ccfe64b7708f7ac1dbc1925b92a76d8d9fc8cbcb32a7d344",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "multinode-439307",
	                        "ba6a97f76636"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
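
The full `docker inspect` dump above can be narrowed to the fields the post-mortem actually reads, the container state and the published API-server port, with a Go format template; a small sketch using standard docker CLI templating (profile name taken from this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "inspect",
		"--format", "{{.State.Status}} {{index .NetworkSettings.Ports \"8443/tcp\"}}",
		"multinode-439307").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Prints e.g.: running [{127.0.0.1 33309}]
	fmt.Printf("%s", out)
}
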
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-439307 -n multinode-439307
helpers_test.go:252: <<< TestMultiNode/serial/MultiNodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 logs -n 25
helpers_test.go:260: TestMultiNode/serial/MultiNodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                              ARGS                                                              │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ mount-start-2-801712 ssh -- ls /minikube-host                                                                                  │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ delete  │ -p mount-start-1-785074 --alsologtostderr -v=5                                                                                 │ mount-start-1-785074 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ ssh     │ mount-start-2-801712 ssh -- ls /minikube-host                                                                                  │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ stop    │ -p mount-start-2-801712                                                                                                        │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ start   │ -p mount-start-2-801712                                                                                                        │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ ssh     │ mount-start-2-801712 ssh -- ls /minikube-host                                                                                  │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ delete  │ -p mount-start-2-801712                                                                                                        │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ delete  │ -p mount-start-1-785074                                                                                                        │ mount-start-1-785074 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
	│ start   │ -p multinode-439307 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml                                              │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- rollout status deployment/busybox                                                                       │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- get pods -o jsonpath='{.items[*].status.podIP}'                                                         │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                        │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.io                                                 │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.io                                                 │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.default                                            │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.default                                            │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.default.svc.cluster.local                          │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.default.svc.cluster.local                          │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                        │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3    │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- sh -c ping -c 1 192.168.67.1                                           │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3    │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- sh -c ping -c 1 192.168.67.1                                           │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
	│ node    │ add -p multinode-439307 -v=5 --alsologtostderr                                                                                 │ multinode-439307     │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:23:17
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:23:17.956987  654880 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:23:17.957267  654880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:23:17.957278  654880 out.go:374] Setting ErrFile to fd 2...
	I1008 14:23:17.957285  654880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:23:17.957560  654880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	I1008 14:23:17.958095  654880 out.go:368] Setting JSON to false
	I1008 14:23:17.959069  654880 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7547,"bootTime":1759925851,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:23:17.959183  654880 start.go:141] virtualization: kvm guest
	I1008 14:23:17.961334  654880 out.go:179] * [multinode-439307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:23:17.962856  654880 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:23:17.962854  654880 notify.go:220] Checking for updates...
	I1008 14:23:17.966278  654880 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:23:17.967770  654880 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	I1008 14:23:17.969198  654880 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	I1008 14:23:17.970595  654880 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:23:17.971850  654880 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:23:17.973258  654880 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:23:17.996300  654880 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:23:17.996406  654880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:23:18.050277  654880 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 14:23:18.040372301 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:23:18.050390  654880 docker.go:318] overlay module found
	I1008 14:23:18.052374  654880 out.go:179] * Using the docker driver based on user configuration
	I1008 14:23:18.054067  654880 start.go:305] selected driver: docker
	I1008 14:23:18.054089  654880 start.go:925] validating driver "docker" against <nil>
	I1008 14:23:18.054101  654880 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:23:18.054660  654880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:23:18.107655  654880 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 14:23:18.098187471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:23:18.107832  654880 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 14:23:18.108067  654880 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:23:18.109831  654880 out.go:179] * Using Docker driver with root privileges
	I1008 14:23:18.111024  654880 cni.go:84] Creating CNI manager for ""
	I1008 14:23:18.111088  654880 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 14:23:18.111100  654880 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 14:23:18.111162  654880 start.go:349] cluster config:
	{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:23:18.112399  654880 out.go:179] * Starting "multinode-439307" primary control-plane node in "multinode-439307" cluster
	I1008 14:23:18.113554  654880 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1008 14:23:18.114910  654880 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:23:18.116063  654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:23:18.116103  654880 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:23:18.116106  654880 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1008 14:23:18.116207  654880 cache.go:58] Caching tarball of preloaded images
	I1008 14:23:18.116291  654880 preload.go:233] Found /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1008 14:23:18.116302  654880 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1008 14:23:18.116625  654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:23:18.116652  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json: {Name:mk22bd6f1fa53f8e3127efb61d08a257a62e2626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:18.136591  654880 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:23:18.136639  654880 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:23:18.136657  654880 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:23:18.136684  654880 start.go:360] acquireMachinesLock for multinode-439307: {Name:mkf4360b9146660aeff5a4ae109e04568869fc59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:23:18.136783  654880 start.go:364] duration metric: took 81.212µs to acquireMachinesLock for "multinode-439307"
	I1008 14:23:18.136807  654880 start.go:93] Provisioning new machine with config: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1008 14:23:18.136878  654880 start.go:125] createHost starting for "" (driver="docker")
	I1008 14:23:18.138834  654880 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 14:23:18.139088  654880 start.go:159] libmachine.API.Create for "multinode-439307" (driver="docker")
	I1008 14:23:18.139120  654880 client.go:168] LocalClient.Create starting
	I1008 14:23:18.139174  654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem
	I1008 14:23:18.139205  654880 main.go:141] libmachine: Decoding PEM data...
	I1008 14:23:18.139219  654880 main.go:141] libmachine: Parsing certificate...
	I1008 14:23:18.139269  654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem
	I1008 14:23:18.139287  654880 main.go:141] libmachine: Decoding PEM data...
	I1008 14:23:18.139297  654880 main.go:141] libmachine: Parsing certificate...
	I1008 14:23:18.139588  654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 14:23:18.155901  654880 cli_runner.go:211] docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 14:23:18.155965  654880 network_create.go:284] running [docker network inspect multinode-439307] to gather additional debugging logs...
	I1008 14:23:18.156005  654880 cli_runner.go:164] Run: docker network inspect multinode-439307
	W1008 14:23:18.172653  654880 cli_runner.go:211] docker network inspect multinode-439307 returned with exit code 1
	I1008 14:23:18.172693  654880 network_create.go:287] error running [docker network inspect multinode-439307]: docker network inspect multinode-439307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-439307 not found
	I1008 14:23:18.172713  654880 network_create.go:289] output of [docker network inspect multinode-439307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-439307 not found
	
	** /stderr **
	I1008 14:23:18.172884  654880 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:23:18.189934  654880 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-579739baec73 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:69:9e:8b:7e:c1} reservation:<nil>}
	I1008 14:23:18.190282  654880 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de056d86a4f7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:00:90:f6:d9:cb} reservation:<nil>}
	I1008 14:23:18.190681  654880 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d36540}
	I1008 14:23:18.190708  654880 network_create.go:124] attempt to create docker network multinode-439307 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1008 14:23:18.190760  654880 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-439307 multinode-439307
	I1008 14:23:18.248882  654880 network_create.go:108] docker network multinode-439307 192.168.67.0/24 created
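Minikube walked the private-subnet ladder above, skipping 192.168.49.0/24 and 192.168.58.0/24 because existing bridges already claim them, and created the cluster network on 192.168.67.0/24. The result can be confirmed by hand with the same Go template the log itself runs (a sketch; the output shown is what this run should produce):

	$ docker network inspect multinode-439307 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	192.168.67.0/24 192.168.67.1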
	I1008 14:23:18.248914  654880 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-439307" container
	I1008 14:23:18.249056  654880 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 14:23:18.266495  654880 cli_runner.go:164] Run: docker volume create multinode-439307 --label name.minikube.sigs.k8s.io=multinode-439307 --label created_by.minikube.sigs.k8s.io=true
	I1008 14:23:18.284793  654880 oci.go:103] Successfully created a docker volume multinode-439307
	I1008 14:23:18.284903  654880 cli_runner.go:164] Run: docker run --rm --name multinode-439307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307 --entrypoint /usr/bin/test -v multinode-439307:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 14:23:18.663779  654880 oci.go:107] Successfully prepared a docker volume multinode-439307
	I1008 14:23:18.663869  654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:23:18.663894  654880 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 14:23:18.663972  654880 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 14:23:23.029420  654880 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.36536671s)
	I1008 14:23:23.029457  654880 kic.go:203] duration metric: took 4.365557889s to extract preloaded images to volume ...
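The preload tarball is unpacked into the named volume before the node container exists, so the node boots with containerd's image store already populated. To peek at what landed in the volume, a throwaway container works (a sketch; it assumes some small image such as busybox is available locally):

	$ docker run --rm -v multinode-439307:/var busybox ls /var/lib/containerd
	# expect containerd state directories such as io.containerd.content.v1.content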
	W1008 14:23:23.029548  654880 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 14:23:23.029580  654880 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 14:23:23.029617  654880 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 14:23:23.086211  654880 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-439307 --name multinode-439307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-439307 --network multinode-439307 --ip 192.168.67.2 --volume multinode-439307:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 14:23:23.354039  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Running}}
	I1008 14:23:23.375428  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:23.393578  654880 cli_runner.go:164] Run: docker exec multinode-439307 stat /var/lib/dpkg/alternatives/iptables
	I1008 14:23:23.439666  654880 oci.go:144] the created container "multinode-439307" has a running status.
	I1008 14:23:23.439697  654880 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa...
	I1008 14:23:23.880004  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 14:23:23.880071  654880 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 14:23:23.906419  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:23.924677  654880 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 14:23:23.924697  654880 kic_runner.go:114] Args: [docker exec --privileged multinode-439307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 14:23:23.977234  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:23.994199  654880 machine.go:93] provisionDockerMachine start ...
	I1008 14:23:23.994313  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:24.011535  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:23:24.011821  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33306 <nil> <nil>}
	I1008 14:23:24.011834  654880 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:23:24.012536  654880 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47140->127.0.0.1:33306: read: connection reset by peer
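The dial error here is benign: sshd inside the freshly started container is not up yet, and the client retries until it answers. The kic driver reaches the node over the container's port 22, published on an ephemeral loopback port by the --publish=127.0.0.1::22 flag in the docker run above; 33306 is that port for this run. The mapping can be recovered at any time (a sketch):

	$ docker port multinode-439307 22
	127.0.0.1:33306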
	I1008 14:23:27.162380  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307
	
	I1008 14:23:27.162426  654880 ubuntu.go:182] provisioning hostname "multinode-439307"
	I1008 14:23:27.162486  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:27.180732  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:23:27.180972  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33306 <nil> <nil>}
	I1008 14:23:27.181011  654880 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-439307 && echo "multinode-439307" | sudo tee /etc/hostname
	I1008 14:23:27.339937  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307
	
	I1008 14:23:27.340069  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:27.358403  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:23:27.358642  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33306 <nil> <nil>}
	I1008 14:23:27.358660  654880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-439307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-439307/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-439307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:23:27.507072  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
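The script above keeps /etc/hosts consistent with the new hostname: if no line already names multinode-439307, it rewrites an existing 127.0.1.1 entry in place and only appends a fresh one otherwise. Afterwards the file should carry the mapping (a sketch of the check):

	$ grep 127.0.1.1 /etc/hosts
	127.0.1.1 multinode-439307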
	I1008 14:23:27.507109  654880 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-513010/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-513010/.minikube}
	I1008 14:23:27.507134  654880 ubuntu.go:190] setting up certificates
	I1008 14:23:27.507146  654880 provision.go:84] configureAuth start
	I1008 14:23:27.507227  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307
	I1008 14:23:27.525723  654880 provision.go:143] copyHostCerts
	I1008 14:23:27.525774  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
	I1008 14:23:27.525813  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem, removing ...
	I1008 14:23:27.525825  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
	I1008 14:23:27.525916  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem (1123 bytes)
	I1008 14:23:27.526089  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
	I1008 14:23:27.526119  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem, removing ...
	I1008 14:23:27.526129  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
	I1008 14:23:27.526175  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem (1675 bytes)
	I1008 14:23:27.526250  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
	I1008 14:23:27.526274  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem, removing ...
	I1008 14:23:27.526283  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
	I1008 14:23:27.526323  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem (1078 bytes)
	I1008 14:23:27.526398  654880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem org=jenkins.multinode-439307 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-439307]
	I1008 14:23:27.677124  654880 provision.go:177] copyRemoteCerts
	I1008 14:23:27.677186  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:23:27.677229  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:27.696280  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:27.800677  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 14:23:27.800760  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1008 14:23:27.821249  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 14:23:27.821317  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 14:23:27.839198  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 14:23:27.839275  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:23:27.857535  654880 provision.go:87] duration metric: took 350.370022ms to configureAuth
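configureAuth generated a server certificate whose SANs were listed a few lines up (127.0.0.1, 192.168.67.2, localhost, minikube, multinode-439307) and copied it to /etc/docker/server.pem on the node. A hedged way to confirm what was baked in:

	$ sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
	# should list the DNS names and IP addresses from the san=[...] line above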
	I1008 14:23:27.857570  654880 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:23:27.857755  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:23:27.857770  654880 machine.go:96] duration metric: took 3.8635448s to provisionDockerMachine
	I1008 14:23:27.857780  654880 client.go:171] duration metric: took 9.718653028s to LocalClient.Create
	I1008 14:23:27.857826  654880 start.go:167] duration metric: took 9.718739942s to libmachine.API.Create "multinode-439307"
	I1008 14:23:27.857838  654880 start.go:293] postStartSetup for "multinode-439307" (driver="docker")
	I1008 14:23:27.857849  654880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:23:27.857921  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:23:27.857970  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:27.876246  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:27.982611  654880 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:23:27.986361  654880 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:23:27.986391  654880 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:23:27.986402  654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/addons for local assets ...
	I1008 14:23:27.986465  654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/files for local assets ...
	I1008 14:23:27.986549  654880 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> 5167872.pem in /etc/ssl/certs
	I1008 14:23:27.986586  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /etc/ssl/certs/5167872.pem
	I1008 14:23:27.986676  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 14:23:27.994501  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /etc/ssl/certs/5167872.pem (1708 bytes)
	I1008 14:23:28.015944  654880 start.go:296] duration metric: took 158.091308ms for postStartSetup
	I1008 14:23:28.016330  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307
	I1008 14:23:28.033722  654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:23:28.034024  654880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:23:28.034069  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:28.051472  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:28.152696  654880 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:23:28.157578  654880 start.go:128] duration metric: took 10.02068325s to createHost
	I1008 14:23:28.157607  654880 start.go:83] releasing machines lock for "multinode-439307", held for 10.020812018s
	I1008 14:23:28.157686  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307
	I1008 14:23:28.175043  654880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:23:28.175118  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:28.175044  654880 ssh_runner.go:195] Run: cat /version.json
	I1008 14:23:28.175238  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:28.192842  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:28.193859  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:28.346450  654880 ssh_runner.go:195] Run: systemctl --version
	I1008 14:23:28.353340  654880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 14:23:28.358122  654880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:23:28.358188  654880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:23:28.384439  654880 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 14:23:28.384464  654880 start.go:495] detecting cgroup driver to use...
	I1008 14:23:28.384495  654880 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:23:28.384566  654880 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1008 14:23:28.399323  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 14:23:28.412378  654880 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:23:28.412440  654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:23:28.428847  654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:23:28.446687  654880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:23:28.526136  654880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:23:28.614080  654880 docker.go:234] disabling docker service ...
	I1008 14:23:28.614149  654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:23:28.633742  654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:23:28.647026  654880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:23:28.727238  654880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:23:28.808930  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
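Both docker.socket and docker.service are stopped, and the socket disabled, before the service is masked: Docker is socket-activated, so stopping only the service would let the next connection to /var/run/docker.sock start it right back up. A quick check that the shutdown stuck (a sketch; both lines should read inactive):

	$ systemctl is-active docker.socket docker.service
	inactive
	inactive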
	I1008 14:23:28.821761  654880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:23:28.836040  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1008 14:23:28.847491  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 14:23:28.856854  654880 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1008 14:23:28.856920  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1008 14:23:28.866133  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 14:23:28.875367  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 14:23:28.884374  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 14:23:28.893574  654880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:23:28.902220  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 14:23:28.911486  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1008 14:23:28.920623  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1008 14:23:28.929996  654880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:23:28.937926  654880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:23:28.946203  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:23:29.028153  654880 ssh_runner.go:195] Run: sudo systemctl restart containerd
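Taken together, the sed edits above pin the pause image, disable OOM-score clamping, switch runc to the systemd cgroup driver, and point the CRI plugin's CNI conf_dir at /etc/cni/net.d; the daemon-reload and restart make them take effect. A hedged spot-check of the rewritten config (indentation elided; exact layout depends on containerd 1.7's default config.toml):

	$ sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup' /etc/containerd/config.toml
	sandbox_image = "registry.k8s.io/pause:3.10.1"
	restrict_oom_score_adj = false
	SystemdCgroup = true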
	I1008 14:23:29.132493  654880 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1008 14:23:29.132559  654880 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1008 14:23:29.136824  654880 start.go:563] Will wait 60s for crictl version
	I1008 14:23:29.136879  654880 ssh_runner.go:195] Run: which crictl
	I1008 14:23:29.140620  654880 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:23:29.166990  654880 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1008 14:23:29.167069  654880 ssh_runner.go:195] Run: containerd --version
	I1008 14:23:29.193758  654880 ssh_runner.go:195] Run: containerd --version
	I1008 14:23:29.222040  654880 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1008 14:23:29.223401  654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:23:29.240948  654880 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1008 14:23:29.245849  654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
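The brace-group idiom here rewrites /etc/hosts in two steps: grep -v strips any stale host.minikube.internal line, the echo appends the fresh gateway mapping, and the combined output lands in a temp file that is then copied back with sudo cp, since a plain sudo ... > /etc/hosts would apply the redirection in the unprivileged caller's shell. The node should now resolve the host (a sketch of the check):

	$ grep host.minikube.internal /etc/hosts
	192.168.67.1	host.minikube.internal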
	I1008 14:23:29.256781  654880 kubeadm.go:883] updating cluster {Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:23:29.256900  654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:23:29.256945  654880 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:23:29.282114  654880 containerd.go:627] all images are preloaded for containerd runtime.
	I1008 14:23:29.282137  654880 containerd.go:534] Images already preloaded, skipping extraction
	I1008 14:23:29.282188  654880 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:23:29.306940  654880 containerd.go:627] all images are preloaded for containerd runtime.
	I1008 14:23:29.306963  654880 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:23:29.306971  654880 kubeadm.go:934] updating node { 192.168.67.2 8443 v1.34.1 containerd true true} ...
	I1008 14:23:29.307091  654880 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-439307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
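The empty ExecStart= followed by a second, full ExecStart is the standard systemd drop-in override pattern: the first line clears the command inherited from the base kubelet.service, the second installs the minikube-specific one (the drop-in itself is the 10-kubeadm.conf scp'd below). Counting the merged unit's start commands on the node should therefore yield two (a sketch):

	$ sudo systemctl cat kubelet | grep -c '^ExecStart'
	2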
	I1008 14:23:29.307158  654880 ssh_runner.go:195] Run: sudo crictl info
	I1008 14:23:29.333006  654880 cni.go:84] Creating CNI manager for ""
	I1008 14:23:29.333038  654880 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 14:23:29.333058  654880 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:23:29.333091  654880 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-439307 NodeName:multinode-439307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:23:29.333227  654880 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "multinode-439307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.67.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
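
The dump above is one file holding four kubeadm API objects separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); kubeadm consumes them all from a single --config path, and the file is scp'd to /var/tmp/minikube/kubeadm.yaml.new just below. A hedged way to sanity-check such a config largely without touching node state, assuming the staged kubeadm binary:

	$ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run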
	
	I1008 14:23:29.333298  654880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:23:29.341693  654880 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:23:29.341752  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:23:29.350015  654880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1008 14:23:29.363305  654880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:23:29.379485  654880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1008 14:23:29.392555  654880 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:23:29.396398  654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:23:29.406631  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:23:29.483438  654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:23:29.509514  654880 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307 for IP: 192.168.67.2
	I1008 14:23:29.509542  654880 certs.go:195] generating shared ca certs ...
	I1008 14:23:29.509563  654880 certs.go:227] acquiring lock for ca certs: {Name:mk57aa9b2383fcc0908491da1ce926c707ff69a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:29.509734  654880 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key
	I1008 14:23:29.509788  654880 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key
	I1008 14:23:29.509802  654880 certs.go:257] generating profile certs ...
	I1008 14:23:29.509910  654880 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key
	I1008 14:23:29.509939  654880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt with IP's: []
	I1008 14:23:29.610645  654880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt ...
	I1008 14:23:29.610679  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt: {Name:mkf1a19119257c35c0be4630341107abefe0712a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:29.610870  654880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key ...
	I1008 14:23:29.610891  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key: {Name:mk49a676c10aed18805a93ab7df3049b7dcfa5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:29.610988  654880 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8
	I1008 14:23:29.611006  654880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I1008 14:23:29.809665  654880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8 ...
	I1008 14:23:29.809701  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8: {Name:mk049ea208d229fa055039856d3579ebb9e0840d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:29.809887  654880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8 ...
	I1008 14:23:29.809902  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8: {Name:mkbbd81466b2cdd0cb264ee782d6df895a6557f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:29.809991  654880 certs.go:382] copying /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8 -> /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt
	I1008 14:23:29.810098  654880 certs.go:386] copying /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8 -> /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key
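The .4f7cecc8 suffix on the freshly minted apiserver cert and key is presumably derived from the SAN set (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.67.2), letting minikube cache one certificate per SAN combination before copying the winner to the canonical apiserver.crt/apiserver.key names. The SANs actually embedded can be read back (a sketch; -ext needs OpenSSL 1.1.1+):

	$ openssl x509 -noout -ext subjectAltName \
	    -in /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt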
	I1008 14:23:29.810163  654880 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key
	I1008 14:23:29.810178  654880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt with IP's: []
	I1008 14:23:30.434846  654880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt ...
	I1008 14:23:30.434880  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt: {Name:mk74033eb7b0061c1da9d5a1860ee35ec43567a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:30.435058  654880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key ...
	I1008 14:23:30.435073  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key: {Name:mkb2b7339b2c5bc4801b86127d693ce13ee35f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:30.435152  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 14:23:30.435180  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 14:23:30.435191  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 14:23:30.435204  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 14:23:30.435216  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 14:23:30.435226  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 14:23:30.435239  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 14:23:30.435249  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 14:23:30.435302  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem (1338 bytes)
	W1008 14:23:30.435341  654880 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787_empty.pem, impossibly tiny 0 bytes
	I1008 14:23:30.435351  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:23:30.435377  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:23:30.435399  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:23:30.435419  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem (1675 bytes)
	I1008 14:23:30.435456  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem (1708 bytes)
	I1008 14:23:30.435480  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:23:30.435493  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem -> /usr/share/ca-certificates/516787.pem
	I1008 14:23:30.435505  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /usr/share/ca-certificates/5167872.pem
	I1008 14:23:30.436154  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:23:30.454787  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 14:23:30.472361  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:23:30.489956  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 14:23:30.507583  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:23:30.525415  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:23:30.543120  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:23:30.560854  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 14:23:30.578730  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:23:30.599796  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem --> /usr/share/ca-certificates/516787.pem (1338 bytes)
	I1008 14:23:30.617312  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /usr/share/ca-certificates/5167872.pem (1708 bytes)
	I1008 14:23:30.635626  654880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:23:30.648875  654880 ssh_runner.go:195] Run: openssl version
	I1008 14:23:30.655674  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516787.pem && ln -fs /usr/share/ca-certificates/516787.pem /etc/ssl/certs/516787.pem"
	I1008 14:23:30.664582  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516787.pem
	I1008 14:23:30.668786  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:09 /usr/share/ca-certificates/516787.pem
	I1008 14:23:30.668853  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516787.pem
	I1008 14:23:30.703803  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516787.pem /etc/ssl/certs/51391683.0"
	I1008 14:23:30.713696  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5167872.pem && ln -fs /usr/share/ca-certificates/5167872.pem /etc/ssl/certs/5167872.pem"
	I1008 14:23:30.722925  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5167872.pem
	I1008 14:23:30.726802  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:09 /usr/share/ca-certificates/5167872.pem
	I1008 14:23:30.726862  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5167872.pem
	I1008 14:23:30.760940  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5167872.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:23:30.770017  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:23:30.778517  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:23:30.782405  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:03 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:23:30.782465  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:23:30.816706  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
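These openssl x509 -hash / ln -fs pairs hand-roll what update-ca-certificates normally does: OpenSSL finds CA certificates in /etc/ssl/certs by a symlink named <subject-hash>.0 (the .0 is a collision counter), so each installed PEM gets a link named after its hash; b5213941 is minikubeCA's, per the command above. Reproducing the lookup by hand:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ readlink /etc/ssl/certs/b5213941.0
	/etc/ssl/certs/minikubeCA.pem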
	I1008 14:23:30.825787  654880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:23:30.829676  654880 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 14:23:30.829741  654880 kubeadm.go:400] StartCluster: {Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:23:30.829825  654880 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1008 14:23:30.829872  654880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:23:30.857015  654880 cri.go:89] found id: ""
	I1008 14:23:30.857078  654880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:23:30.865318  654880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:23:30.873182  654880 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:23:30.873235  654880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:23:30.880797  654880 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:23:30.880817  654880 kubeadm.go:157] found existing configuration files:
	
	I1008 14:23:30.880879  654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 14:23:30.888347  654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:23:30.888425  654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:23:30.895504  654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 14:23:30.903314  654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:23:30.903371  654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:23:30.911037  654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 14:23:30.918990  654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:23:30.919046  654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:23:30.927124  654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 14:23:30.935194  654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:23:30.935282  654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
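	[editor's note] The grep/rm pairs above implement a stale-config check: each kubeconfig is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so kubeadm init can regenerate it. A hypothetical Go sketch of that loop (the command strings are taken verbatim from the log; the loop structure is assumed):

package main

import "fmt"

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range confs {
		// grep exits non-zero when the file is missing or lacks the endpoint;
		// either way the file is removed, as in the log above.
		fmt.Printf("sudo grep %s %s || sudo rm -f %s\n", endpoint, f, f)
	}
}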
	I1008 14:23:30.943051  654880 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:23:31.011073  654880 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:23:31.072669  654880 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 14:23:42.013295  654880 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:23:42.013386  654880 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:23:42.013526  654880 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 14:23:42.013610  654880 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 14:23:42.013681  654880 kubeadm.go:318] OS: Linux
	I1008 14:23:42.013738  654880 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 14:23:42.013787  654880 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 14:23:42.013830  654880 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 14:23:42.013874  654880 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 14:23:42.013925  654880 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 14:23:42.014006  654880 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 14:23:42.014054  654880 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 14:23:42.014092  654880 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 14:23:42.014187  654880 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:23:42.014301  654880 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:23:42.014382  654880 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:23:42.014436  654880 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:23:42.015973  654880 out.go:252]   - Generating certificates and keys ...
	I1008 14:23:42.016057  654880 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:23:42.016112  654880 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:23:42.016189  654880 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 14:23:42.016266  654880 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 14:23:42.016339  654880 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 14:23:42.016411  654880 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 14:23:42.016496  654880 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 14:23:42.016630  654880 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-439307] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1008 14:23:42.016681  654880 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 14:23:42.016787  654880 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-439307] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1008 14:23:42.016843  654880 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 14:23:42.016903  654880 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 14:23:42.016945  654880 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 14:23:42.017040  654880 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:23:42.017097  654880 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:23:42.017144  654880 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:23:42.017213  654880 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:23:42.017286  654880 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:23:42.017348  654880 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:23:42.017478  654880 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:23:42.017571  654880 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:23:42.019103  654880 out.go:252]   - Booting up control plane ...
	I1008 14:23:42.019195  654880 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:23:42.019290  654880 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:23:42.019381  654880 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:23:42.019498  654880 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:23:42.019651  654880 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:23:42.019758  654880 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:23:42.019874  654880 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:23:42.019923  654880 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:23:42.020112  654880 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:23:42.020255  654880 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:23:42.020363  654880 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.925821ms
	I1008 14:23:42.020445  654880 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:23:42.020510  654880 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.67.2:8443/livez
	I1008 14:23:42.020603  654880 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:23:42.020682  654880 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:23:42.020747  654880 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.97144594s
	I1008 14:23:42.020832  654880 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.662946335s
	I1008 14:23:42.020919  654880 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501501466s
	I1008 14:23:42.021101  654880 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 14:23:42.021289  654880 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 14:23:42.021368  654880 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 14:23:42.021621  654880 kubeadm.go:318] [mark-control-plane] Marking the node multinode-439307 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 14:23:42.021687  654880 kubeadm.go:318] [bootstrap-token] Using token: i5r6w0.sj0dfahq56oi5osn
	I1008 14:23:42.023115  654880 out.go:252]   - Configuring RBAC rules ...
	I1008 14:23:42.023282  654880 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 14:23:42.023409  654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 14:23:42.023542  654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 14:23:42.023709  654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 14:23:42.023851  654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 14:23:42.023949  654880 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 14:23:42.024072  654880 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 14:23:42.024109  654880 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 14:23:42.024148  654880 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 14:23:42.024154  654880 kubeadm.go:318] 
	I1008 14:23:42.024215  654880 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 14:23:42.024224  654880 kubeadm.go:318] 
	I1008 14:23:42.024309  654880 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 14:23:42.024319  654880 kubeadm.go:318] 
	I1008 14:23:42.024361  654880 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 14:23:42.024433  654880 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 14:23:42.024475  654880 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 14:23:42.024485  654880 kubeadm.go:318] 
	I1008 14:23:42.024537  654880 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 14:23:42.024543  654880 kubeadm.go:318] 
	I1008 14:23:42.024588  654880 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 14:23:42.024595  654880 kubeadm.go:318] 
	I1008 14:23:42.024647  654880 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 14:23:42.024727  654880 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 14:23:42.024793  654880 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 14:23:42.024806  654880 kubeadm.go:318] 
	I1008 14:23:42.024904  654880 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 14:23:42.025017  654880 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 14:23:42.025034  654880 kubeadm.go:318] 
	I1008 14:23:42.025112  654880 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token i5r6w0.sj0dfahq56oi5osn \
	I1008 14:23:42.025201  654880 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f \
	I1008 14:23:42.025232  654880 kubeadm.go:318] 	--control-plane 
	I1008 14:23:42.025242  654880 kubeadm.go:318] 
	I1008 14:23:42.025327  654880 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 14:23:42.025334  654880 kubeadm.go:318] 
	I1008 14:23:42.025424  654880 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token i5r6w0.sj0dfahq56oi5osn \
	I1008 14:23:42.025535  654880 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f 
	I1008 14:23:42.025548  654880 cni.go:84] Creating CNI manager for ""
	I1008 14:23:42.025554  654880 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 14:23:42.027007  654880 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 14:23:42.028122  654880 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 14:23:42.033376  654880 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 14:23:42.033399  654880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 14:23:42.047336  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 14:23:42.257680  654880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 14:23:42.257777  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:42.257788  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-439307 minikube.k8s.io/updated_at=2025_10_08T14_23_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=multinode-439307 minikube.k8s.io/primary=true
	I1008 14:23:42.267920  654880 ops.go:34] apiserver oom_adj: -16
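	[editor's note] The -16 above comes from reading the apiserver's /proc/<pid>/oom_adj (see the "cat /proc/$(pgrep kube-apiserver)/oom_adj" run earlier), confirming the kernel will strongly avoid OOM-killing it. A hypothetical Linux-only Go sketch of the same check:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err) // process not running
	}
	pid := strings.Fields(string(out))[0] // first match if several
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // the log reported -16
}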
	I1008 14:23:42.333752  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:42.834103  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:43.334513  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:43.834031  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:44.334151  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:44.834515  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:45.334213  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:45.834573  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:46.334831  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:46.833924  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:23:46.910005  654880 kubeadm.go:1113] duration metric: took 4.652297133s to wait for elevateKubeSystemPrivileges
	I1008 14:23:46.910044  654880 kubeadm.go:402] duration metric: took 16.080310474s to StartCluster
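	[editor's note] The ten "kubectl get sa default" attempts above land exactly 500ms apart until the default service account exists (~4.65s total). A hypothetical sketch of that poll loop; the interval is read off the timestamps, while the 2-minute deadline is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds once kube-controller-manager has created the account.
		if exec.Command("sudo", args...).Run() == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}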
	I1008 14:23:46.910065  654880 settings.go:142] acquiring lock: {Name:mk8e4c0f084ac2281293848ef8bd3096692e3417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:46.910151  654880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-513010/kubeconfig
	I1008 14:23:46.910878  654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/kubeconfig: {Name:mk629eb0239182a6659e3d616a150e5234772a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:23:46.911151  654880 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1008 14:23:46.911192  654880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 14:23:46.911219  654880 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 14:23:46.911355  654880 addons.go:69] Setting storage-provisioner=true in profile "multinode-439307"
	I1008 14:23:46.911395  654880 addons.go:238] Setting addon storage-provisioner=true in "multinode-439307"
	I1008 14:23:46.911396  654880 addons.go:69] Setting default-storageclass=true in profile "multinode-439307"
	I1008 14:23:46.911426  654880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-439307"
	I1008 14:23:46.911435  654880 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:23:46.911401  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:23:46.911826  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:46.912016  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:46.912657  654880 out.go:179] * Verifying Kubernetes components...
	I1008 14:23:46.917571  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:23:46.938275  654880 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 14:23:46.938669  654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:23:46.939674  654880 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 14:23:46.939699  654880 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 14:23:46.939706  654880 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 14:23:46.939712  654880 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 14:23:46.939717  654880 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 14:23:46.939726  654880 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 14:23:46.940295  654880 addons.go:238] Setting addon default-storageclass=true in "multinode-439307"
	I1008 14:23:46.940373  654880 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:23:46.940553  654880 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:23:46.940574  654880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 14:23:46.940644  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:46.940902  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:23:46.975775  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:46.977730  654880 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 14:23:46.977762  654880 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 14:23:46.977823  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:23:47.019509  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:23:47.062130  654880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
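	[editor's note] The sed pipeline above rewrites the CoreDNS Corefile in flight: it inserts a "hosts" block that resolves host.minikube.internal to the network gateway before the forward plugin runs (it also inserts a "log" directive, omitted here). A hypothetical in-memory Go equivalent; the sample Corefile is an abridged assumption:

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
        cache 30
}`
	hosts := "        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }"
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		// Mirror the sed: insert the hosts block just before the forward line.
		if strings.TrimSpace(line) == "forward . /etc/resolv.conf" {
			out = append(out, hosts)
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}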
	I1008 14:23:47.114509  654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:23:47.131279  654880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:23:47.146387  654880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:23:47.235407  654880 start.go:976] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I1008 14:23:47.236068  654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:23:47.236068  654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:23:47.236491  654880 node_ready.go:35] waiting up to 6m0s for node "multinode-439307" to be "Ready" ...
	I1008 14:23:47.437485  654880 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1008 14:23:47.438409  654880 addons.go:514] duration metric: took 527.189163ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 14:23:47.740068  654880 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-439307" context rescaled to 1 replicas
	W1008 14:23:49.240140  654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
	W1008 14:23:51.240341  654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
	W1008 14:23:53.740468  654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
	W1008 14:23:55.740674  654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
	I1008 14:23:58.240406  654880 node_ready.go:49] node "multinode-439307" is "Ready"
	I1008 14:23:58.240442  654880 node_ready.go:38] duration metric: took 11.003905737s for node "multinode-439307" to be "Ready" ...
	I1008 14:23:58.240462  654880 api_server.go:52] waiting for apiserver process to appear ...
	I1008 14:23:58.240528  654880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:23:58.256864  654880 api_server.go:72] duration metric: took 11.345663766s to wait for apiserver process to appear ...
	I1008 14:23:58.256909  654880 api_server.go:88] waiting for apiserver healthz status ...
	I1008 14:23:58.256937  654880 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1008 14:23:58.261705  654880 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1008 14:23:58.262918  654880 api_server.go:141] control plane version: v1.34.1
	I1008 14:23:58.262945  654880 api_server.go:131] duration metric: took 6.028377ms to wait for apiserver health ...
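	[editor's note] The healthz probe above is a plain HTTPS GET that must return 200 with body "ok" before minikube proceeds. A hypothetical Go sketch of the same probe; it skips TLS verification only because this standalone example has no access to the cluster CA (minikube itself verifies against ca.crt):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // the log showed: 200 ok
}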
	I1008 14:23:58.262956  654880 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 14:23:58.267800  654880 system_pods.go:59] 8 kube-system pods found
	I1008 14:23:58.267853  654880 system_pods.go:61] "coredns-66bc5c9577-llvkc" [a445b5ef-8d30-4b7c-a40f-77f2a9072e7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 14:23:58.267870  654880 system_pods.go:61] "etcd-multinode-439307" [1989112b-ab3b-4883-9f2c-19ee41565704] Running
	I1008 14:23:58.267878  654880 system_pods.go:61] "kindnet-l6pqj" [fea0f284-17d4-438c-91a6-14831ce6ce5c] Running
	I1008 14:23:58.267884  654880 system_pods.go:61] "kube-apiserver-multinode-439307" [18f77e80-010e-4779-9028-6093a55219c5] Running
	I1008 14:23:58.267889  654880 system_pods.go:61] "kube-controller-manager-multinode-439307" [f4954c96-43a5-408b-a99e-423ab197e112] Running
	I1008 14:23:58.267903  654880 system_pods.go:61] "kube-proxy-sjzfx" [1211872c-1472-435c-a117-2656ba2fca8e] Running
	I1008 14:23:58.267908  654880 system_pods.go:61] "kube-scheduler-multinode-439307" [a940c86a-bd75-4100-86f5-0b6a53040f2b] Running
	I1008 14:23:58.267914  654880 system_pods.go:61] "storage-provisioner" [e1d410c3-de2a-4e2a-88c1-93970ce8b254] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 14:23:58.267923  654880 system_pods.go:74] duration metric: took 4.960123ms to wait for pod list to return data ...
	I1008 14:23:58.267935  654880 default_sa.go:34] waiting for default service account to be created ...
	I1008 14:23:58.270747  654880 default_sa.go:45] found service account: "default"
	I1008 14:23:58.270770  654880 default_sa.go:55] duration metric: took 2.828587ms for default service account to be created ...
	I1008 14:23:58.270784  654880 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 14:23:58.273850  654880 system_pods.go:86] 8 kube-system pods found
	I1008 14:23:58.273881  654880 system_pods.go:89] "coredns-66bc5c9577-llvkc" [a445b5ef-8d30-4b7c-a40f-77f2a9072e7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 14:23:58.273886  654880 system_pods.go:89] "etcd-multinode-439307" [1989112b-ab3b-4883-9f2c-19ee41565704] Running
	I1008 14:23:58.273892  654880 system_pods.go:89] "kindnet-l6pqj" [fea0f284-17d4-438c-91a6-14831ce6ce5c] Running
	I1008 14:23:58.273896  654880 system_pods.go:89] "kube-apiserver-multinode-439307" [18f77e80-010e-4779-9028-6093a55219c5] Running
	I1008 14:23:58.273899  654880 system_pods.go:89] "kube-controller-manager-multinode-439307" [f4954c96-43a5-408b-a99e-423ab197e112] Running
	I1008 14:23:58.273903  654880 system_pods.go:89] "kube-proxy-sjzfx" [1211872c-1472-435c-a117-2656ba2fca8e] Running
	I1008 14:23:58.273911  654880 system_pods.go:89] "kube-scheduler-multinode-439307" [a940c86a-bd75-4100-86f5-0b6a53040f2b] Running
	I1008 14:23:58.273916  654880 system_pods.go:89] "storage-provisioner" [e1d410c3-de2a-4e2a-88c1-93970ce8b254] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 14:23:58.273944  654880 retry.go:31] will retry after 204.950572ms: missing components: kube-dns
	I1008 14:23:58.483515  654880 system_pods.go:86] 8 kube-system pods found
	I1008 14:23:58.483557  654880 system_pods.go:89] "coredns-66bc5c9577-llvkc" [a445b5ef-8d30-4b7c-a40f-77f2a9072e7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 14:23:58.483566  654880 system_pods.go:89] "etcd-multinode-439307" [1989112b-ab3b-4883-9f2c-19ee41565704] Running
	I1008 14:23:58.483573  654880 system_pods.go:89] "kindnet-l6pqj" [fea0f284-17d4-438c-91a6-14831ce6ce5c] Running
	I1008 14:23:58.483577  654880 system_pods.go:89] "kube-apiserver-multinode-439307" [18f77e80-010e-4779-9028-6093a55219c5] Running
	I1008 14:23:58.483581  654880 system_pods.go:89] "kube-controller-manager-multinode-439307" [f4954c96-43a5-408b-a99e-423ab197e112] Running
	I1008 14:23:58.483586  654880 system_pods.go:89] "kube-proxy-sjzfx" [1211872c-1472-435c-a117-2656ba2fca8e] Running
	I1008 14:23:58.483591  654880 system_pods.go:89] "kube-scheduler-multinode-439307" [a940c86a-bd75-4100-86f5-0b6a53040f2b] Running
	I1008 14:23:58.483605  654880 system_pods.go:89] "storage-provisioner" [e1d410c3-de2a-4e2a-88c1-93970ce8b254] Running
	I1008 14:23:58.483625  654880 system_pods.go:126] duration metric: took 212.832591ms to wait for k8s-apps to be running ...
	I1008 14:23:58.483639  654880 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 14:23:58.483696  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:23:58.497700  654880 system_svc.go:56] duration metric: took 14.052432ms WaitForService to wait for kubelet
	I1008 14:23:58.497735  654880 kubeadm.go:586] duration metric: took 11.586544695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:23:58.497762  654880 node_conditions.go:102] verifying NodePressure condition ...
	I1008 14:23:58.501151  654880 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1008 14:23:58.501216  654880 node_conditions.go:123] node cpu capacity is 8
	I1008 14:23:58.501243  654880 node_conditions.go:105] duration metric: took 3.474604ms to run NodePressure ...
	I1008 14:23:58.501258  654880 start.go:241] waiting for startup goroutines ...
	I1008 14:23:58.501268  654880 start.go:246] waiting for cluster config update ...
	I1008 14:23:58.501283  654880 start.go:255] writing updated cluster config ...
	I1008 14:23:58.503410  654880 out.go:203] 
	I1008 14:23:58.504758  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:23:58.504834  654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:23:58.506429  654880 out.go:179] * Starting "multinode-439307-m02" worker node in "multinode-439307" cluster
	I1008 14:23:58.508117  654880 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1008 14:23:58.509438  654880 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:23:58.510664  654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:23:58.510689  654880 cache.go:58] Caching tarball of preloaded images
	I1008 14:23:58.510780  654880 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:23:58.510807  654880 preload.go:233] Found /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1008 14:23:58.510816  654880 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1008 14:23:58.510889  654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:23:58.532250  654880 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:23:58.532275  654880 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:23:58.532296  654880 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:23:58.532333  654880 start.go:360] acquireMachinesLock for multinode-439307-m02: {Name:mkd110918dd178f7f1251cdb6cbe49ec290497a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:23:58.532447  654880 start.go:364] duration metric: took 91.76µs to acquireMachinesLock for "multinode-439307-m02"
	I1008 14:23:58.532478  654880 start.go:93] Provisioning new machine with config: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I1008 14:23:58.532562  654880 start.go:125] createHost starting for "m02" (driver="docker")
	I1008 14:23:58.535151  654880 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 14:23:58.535282  654880 start.go:159] libmachine.API.Create for "multinode-439307" (driver="docker")
	I1008 14:23:58.535317  654880 client.go:168] LocalClient.Create starting
	I1008 14:23:58.535405  654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem
	I1008 14:23:58.535446  654880 main.go:141] libmachine: Decoding PEM data...
	I1008 14:23:58.535467  654880 main.go:141] libmachine: Parsing certificate...
	I1008 14:23:58.535539  654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem
	I1008 14:23:58.535570  654880 main.go:141] libmachine: Decoding PEM data...
	I1008 14:23:58.535600  654880 main.go:141] libmachine: Parsing certificate...
	I1008 14:23:58.535837  654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:23:58.553063  654880 network_create.go:77] Found existing network {name:multinode-439307 subnet:0xc00096a0f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I1008 14:23:58.553121  654880 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-439307-m02" container
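	[editor's note] The "calculated static IP" above appears to follow from the existing network: within the cluster's 192.168.67.0/24 (gateway .1), the n-th machine takes gateway+n, so the control plane holds .2 and the new m02 container gets .3. A hypothetical sketch of that arithmetic (the assignment rule is an inference from this log, not confirmed minikube behavior):

package main

import (
	"fmt"
	"net/netip"
)

// staticIP returns the address index hops past the gateway.
func staticIP(gateway netip.Addr, index int) netip.Addr {
	ip := gateway
	for i := 0; i < index; i++ {
		ip = ip.Next()
	}
	return ip
}

func main() {
	gw := netip.MustParseAddr("192.168.67.1")
	fmt.Println(staticIP(gw, 1)) // 192.168.67.2 (control plane)
	fmt.Println(staticIP(gw, 2)) // 192.168.67.3 (multinode-439307-m02)
}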
	I1008 14:23:58.553194  654880 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 14:23:58.571642  654880 cli_runner.go:164] Run: docker volume create multinode-439307-m02 --label name.minikube.sigs.k8s.io=multinode-439307-m02 --label created_by.minikube.sigs.k8s.io=true
	I1008 14:23:58.590094  654880 oci.go:103] Successfully created a docker volume multinode-439307-m02
	I1008 14:23:58.590216  654880 cli_runner.go:164] Run: docker run --rm --name multinode-439307-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307-m02 --entrypoint /usr/bin/test -v multinode-439307-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 14:23:58.980132  654880 oci.go:107] Successfully prepared a docker volume multinode-439307-m02
	I1008 14:23:58.980183  654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1008 14:23:58.980210  654880 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 14:23:58.980284  654880 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 14:24:03.452942  654880 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.472598209s)
	I1008 14:24:03.452997  654880 kic.go:203] duration metric: took 4.472765246s to extract preloaded images to volume ...
	W1008 14:24:03.453098  654880 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 14:24:03.453135  654880 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 14:24:03.453189  654880 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 14:24:03.514279  654880 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-439307-m02 --name multinode-439307-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-439307-m02 --network multinode-439307 --ip 192.168.67.3 --volume multinode-439307-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 14:24:03.806322  654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Running}}
	I1008 14:24:03.825192  654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Status}}
	I1008 14:24:03.843451  654880 cli_runner.go:164] Run: docker exec multinode-439307-m02 stat /var/lib/dpkg/alternatives/iptables
	I1008 14:24:03.887312  654880 oci.go:144] the created container "multinode-439307-m02" has a running status.
	I1008 14:24:03.887351  654880 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa...
	I1008 14:24:03.981880  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 14:24:03.981940  654880 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 14:24:04.008560  654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Status}}
	I1008 14:24:04.028620  654880 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 14:24:04.028641  654880 kic_runner.go:114] Args: [docker exec --privileged multinode-439307-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 14:24:04.085475  654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Status}}
	I1008 14:24:04.104162  654880 machine.go:93] provisionDockerMachine start ...
	I1008 14:24:04.104268  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:04.125664  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:24:04.126030  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33311 <nil> <nil>}
	I1008 14:24:04.126052  654880 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:24:04.126862  654880 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34090->127.0.0.1:33311: read: connection reset by peer
	I1008 14:24:07.275164  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307-m02
	
	I1008 14:24:07.275197  654880 ubuntu.go:182] provisioning hostname "multinode-439307-m02"
	I1008 14:24:07.275268  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:07.293538  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:24:07.293764  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33311 <nil> <nil>}
	I1008 14:24:07.293777  654880 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-439307-m02 && echo "multinode-439307-m02" | sudo tee /etc/hostname
	I1008 14:24:07.452309  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307-m02
	
	I1008 14:24:07.452395  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:07.470682  654880 main.go:141] libmachine: Using SSH client type: native
	I1008 14:24:07.470904  654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33311 <nil> <nil>}
	I1008 14:24:07.470926  654880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-439307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-439307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-439307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:24:07.619123  654880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:24:07.619159  654880 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-513010/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-513010/.minikube}
	I1008 14:24:07.619176  654880 ubuntu.go:190] setting up certificates
	I1008 14:24:07.619189  654880 provision.go:84] configureAuth start
	I1008 14:24:07.619267  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m02
	I1008 14:24:07.636645  654880 provision.go:143] copyHostCerts
	I1008 14:24:07.636697  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
	I1008 14:24:07.636734  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem, removing ...
	I1008 14:24:07.636744  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
	I1008 14:24:07.636809  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem (1078 bytes)
	I1008 14:24:07.636900  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
	I1008 14:24:07.636921  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem, removing ...
	I1008 14:24:07.636925  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
	I1008 14:24:07.636953  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem (1123 bytes)
	I1008 14:24:07.637030  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
	I1008 14:24:07.637053  654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem, removing ...
	I1008 14:24:07.637061  654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
	I1008 14:24:07.637088  654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem (1675 bytes)
	I1008 14:24:07.637144  654880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem org=jenkins.multinode-439307-m02 san=[127.0.0.1 192.168.67.3 localhost minikube multinode-439307-m02]
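	[editor's note] The server cert generated above carries the SAN list shown in the log (localhost, minikube, the node name, 127.0.0.1, 192.168.67.3). A simplified, hypothetical sketch using Go's standard library; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-439307-m02"}}, // org= from the log
		DNSNames:     []string{"localhost", "minikube", "multinode-439307-m02"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.3")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: template doubles as parent. minikube's real flow signs with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}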
	I1008 14:24:07.912616  654880 provision.go:177] copyRemoteCerts
	I1008 14:24:07.912701  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:24:07.912746  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:07.930775  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
	I1008 14:24:08.036822  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 14:24:08.036899  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:24:08.057016  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 14:24:08.057099  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1008 14:24:08.075825  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 14:24:08.075887  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 14:24:08.094562  654880 provision.go:87] duration metric: took 475.356058ms to configureAuth
	I1008 14:24:08.094595  654880 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:24:08.094805  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:24:08.094818  654880 machine.go:96] duration metric: took 3.990634645s to provisionDockerMachine
	I1008 14:24:08.094825  654880 client.go:171] duration metric: took 9.55949919s to LocalClient.Create
	I1008 14:24:08.094846  654880 start.go:167] duration metric: took 9.559564892s to libmachine.API.Create "multinode-439307"
	I1008 14:24:08.094856  654880 start.go:293] postStartSetup for "multinode-439307-m02" (driver="docker")
	I1008 14:24:08.094864  654880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:24:08.094910  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:24:08.094953  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:08.112924  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
	I1008 14:24:08.218693  654880 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:24:08.222553  654880 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:24:08.222590  654880 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:24:08.222601  654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/addons for local assets ...
	I1008 14:24:08.222660  654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/files for local assets ...
	I1008 14:24:08.222816  654880 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> 5167872.pem in /etc/ssl/certs
	I1008 14:24:08.222833  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /etc/ssl/certs/5167872.pem
	I1008 14:24:08.222964  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 14:24:08.231254  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /etc/ssl/certs/5167872.pem (1708 bytes)
	I1008 14:24:08.252383  654880 start.go:296] duration metric: took 157.508647ms for postStartSetup
	I1008 14:24:08.252769  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m02
	I1008 14:24:08.270607  654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
	I1008 14:24:08.270881  654880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:24:08.270929  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:08.288967  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
	I1008 14:24:08.390387  654880 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:24:08.395431  654880 start.go:128] duration metric: took 9.862849739s to createHost
	I1008 14:24:08.395464  654880 start.go:83] releasing machines lock for "multinode-439307-m02", held for 9.863003309s
	I1008 14:24:08.395547  654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m02
	I1008 14:24:08.415924  654880 out.go:179] * Found network options:
	I1008 14:24:08.417255  654880 out.go:179]   - NO_PROXY=192.168.67.2
	W1008 14:24:08.418465  654880 proxy.go:120] fail to check proxy env: Error ip not in block
	W1008 14:24:08.418511  654880 proxy.go:120] fail to check proxy env: Error ip not in block
	I1008 14:24:08.418612  654880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 14:24:08.418625  654880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:24:08.418653  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:08.418693  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:08.439832  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
	I1008 14:24:08.440289  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
	W1008 14:24:08.596782  654880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:24:08.596862  654880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:24:08.623270  654880 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 14:24:08.623295  654880 start.go:495] detecting cgroup driver to use...
	I1008 14:24:08.623333  654880 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:24:08.623386  654880 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1008 14:24:08.638627  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 14:24:08.651897  654880 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:24:08.651966  654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:24:08.670277  654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:24:08.688725  654880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:24:08.771633  654880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:24:08.860938  654880 docker.go:234] disabling docker service ...
	I1008 14:24:08.861030  654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:24:08.880549  654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:24:08.894395  654880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:24:08.979782  654880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:24:09.065757  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
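	The stop/disable/mask sequence above is the usual escalation for keeping a competing runtime down: stop ends the running unit, disable removes its boot-time links, and mask points the unit at /dev/null so not even a dependency can pull it back up. A minimal sketch of the check:
	    sudo systemctl mask docker.service
	    systemctl is-enabled docker.service    # prints "masked"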
	I1008 14:24:09.079136  654880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:24:09.095338  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1008 14:24:09.107275  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 14:24:09.117636  654880 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1008 14:24:09.117701  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1008 14:24:09.127943  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 14:24:09.138714  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 14:24:09.148727  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 14:24:09.158882  654880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:24:09.168295  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 14:24:09.178665  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1008 14:24:09.188393  654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
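	The sed runs above patch /etc/containerd/config.toml in place rather than templating a fresh file. A sketch of a spot-check, assuming the stock config layout:
	    sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	    # expected after the edits:
	    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
	    #   SystemdCgroup = true
	    #   conf_dir = "/etc/cni/net.d"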
	I1008 14:24:09.198424  654880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:24:09.206454  654880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:24:09.215144  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:24:09.294927  654880 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1008 14:24:09.407140  654880 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1008 14:24:09.407220  654880 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1008 14:24:09.411681  654880 start.go:563] Will wait 60s for crictl version
	I1008 14:24:09.411754  654880 ssh_runner.go:195] Run: which crictl
	I1008 14:24:09.415949  654880 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:24:09.443331  654880 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1008 14:24:09.443406  654880 ssh_runner.go:195] Run: containerd --version
	I1008 14:24:09.469419  654880 ssh_runner.go:195] Run: containerd --version
	I1008 14:24:09.496238  654880 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1008 14:24:09.497613  654880 out.go:179]   - env NO_PROXY=192.168.67.2
	I1008 14:24:09.498926  654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:24:09.517143  654880 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1008 14:24:09.521732  654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
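	The one-liner above filters out any stale host.minikube.internal entry, appends the new one, and copies the temp file back with cp rather than mv: inside a Docker container /etc/hosts is a bind-mounted file, so it can be overwritten in place but not replaced by rename. A sketch of the distinction (/tmp/hosts.new standing in for the /tmp/h.$$ above):
	    sudo cp /tmp/hosts.new /etc/hosts    # works: writes through the bind mount
	    sudo mv /tmp/hosts.new /etc/hosts    # fails: "Device or resource busy"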
	I1008 14:24:09.533155  654880 mustload.go:65] Loading cluster: multinode-439307
	I1008 14:24:09.533379  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:24:09.533664  654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:24:09.552397  654880 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:24:09.552676  654880 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307 for IP: 192.168.67.3
	I1008 14:24:09.552690  654880 certs.go:195] generating shared ca certs ...
	I1008 14:24:09.552707  654880 certs.go:227] acquiring lock for ca certs: {Name:mk57aa9b2383fcc0908491da1ce926c707ff69a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:24:09.552825  654880 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key
	I1008 14:24:09.552870  654880 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key
	I1008 14:24:09.552884  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 14:24:09.552899  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 14:24:09.552911  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 14:24:09.552921  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 14:24:09.553005  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem (1338 bytes)
	W1008 14:24:09.553040  654880 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787_empty.pem, impossibly tiny 0 bytes
	I1008 14:24:09.553048  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:24:09.553076  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:24:09.553109  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:24:09.553130  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem (1675 bytes)
	I1008 14:24:09.553168  654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem (1708 bytes)
	I1008 14:24:09.553193  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem -> /usr/share/ca-certificates/516787.pem
	I1008 14:24:09.553207  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /usr/share/ca-certificates/5167872.pem
	I1008 14:24:09.553222  654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:09.553242  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:24:09.573504  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 14:24:09.592232  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:24:09.610884  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 14:24:09.630003  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem --> /usr/share/ca-certificates/516787.pem (1338 bytes)
	I1008 14:24:09.653800  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /usr/share/ca-certificates/5167872.pem (1708 bytes)
	I1008 14:24:09.675803  654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:24:09.695568  654880 ssh_runner.go:195] Run: openssl version
	I1008 14:24:09.702733  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516787.pem && ln -fs /usr/share/ca-certificates/516787.pem /etc/ssl/certs/516787.pem"
	I1008 14:24:09.712131  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516787.pem
	I1008 14:24:09.716287  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:09 /usr/share/ca-certificates/516787.pem
	I1008 14:24:09.716357  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516787.pem
	I1008 14:24:09.752537  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516787.pem /etc/ssl/certs/51391683.0"
	I1008 14:24:09.762173  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5167872.pem && ln -fs /usr/share/ca-certificates/5167872.pem /etc/ssl/certs/5167872.pem"
	I1008 14:24:09.772303  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5167872.pem
	I1008 14:24:09.776649  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:09 /usr/share/ca-certificates/5167872.pem
	I1008 14:24:09.776712  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5167872.pem
	I1008 14:24:09.812619  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5167872.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:24:09.823098  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:24:09.832190  654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:09.836566  654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:03 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:09.836631  654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:24:09.871385  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
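	The 8-hex-digit link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, the naming scheme OpenSSL uses to index certificates for lookup in /etc/ssl/certs. Reproducing the last one by hand:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0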
	I1008 14:24:09.881326  654880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:24:09.885609  654880 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 14:24:09.885678  654880 kubeadm.go:934] updating node {m02 192.168.67.3 8443 v1.34.1 containerd false true} ...
	I1008 14:24:09.885785  654880 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-439307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:24:09.885854  654880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:24:09.894180  654880 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:24:09.894257  654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1008 14:24:09.902472  654880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1008 14:24:09.916134  654880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:24:09.931662  654880 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:24:09.935628  654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:24:09.946151  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:24:10.025257  654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:24:10.052607  654880 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:24:10.052868  654880 start.go:317] joinCluster: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:24:10.052965  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1008 14:24:10.053040  654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:24:10.072940  654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:24:10.226647  654880 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I1008 14:24:10.226740  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4ut921.623axv37vw0z44c2 --discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-439307-m02"
	I1008 14:24:11.499926  654880 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4ut921.623axv37vw0z44c2 --discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-439307-m02": (1.273161843s)
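	The --discovery-token-ca-cert-hash flag in the join above pins the control plane's CA during bootstrap, so the joining node rejects an API server presenting any other CA. The hash can be recomputed on the control-plane node with the standard kubeadm recipe:
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'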
	I1008 14:24:11.500025  654880 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1008 14:24:11.684824  654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-439307-m02 minikube.k8s.io/updated_at=2025_10_08T14_24_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=multinode-439307 minikube.k8s.io/primary=false
	I1008 14:24:11.757264  654880 start.go:319] duration metric: took 1.704388689s to joinCluster
	I1008 14:24:11.757362  654880 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I1008 14:24:11.757686  654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:24:11.759939  654880 out.go:179] * Verifying Kubernetes components...
	I1008 14:24:11.761383  654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:24:11.853236  654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:24:11.868476  654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:24:11.868890  654880 node_ready.go:35] waiting up to 6m0s for node "multinode-439307-m02" to be "Ready" ...
	W1008 14:24:13.872620  654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
	W1008 14:24:16.372273  654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
	W1008 14:24:18.372477  654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
	W1008 14:24:20.372540  654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
	W1008 14:24:22.872160  654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
	I1008 14:24:24.371830  654880 node_ready.go:49] node "multinode-439307-m02" is "Ready"
	I1008 14:24:24.371861  654880 node_ready.go:38] duration metric: took 12.502945701s for node "multinode-439307-m02" to be "Ready" ...
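	The wait loop above polls the node object's Ready condition through the API server; the equivalent one-off check with kubectl would be:
	    kubectl get node multinode-439307-m02 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'    # prints True once the kubelet reports ready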
	I1008 14:24:24.371877  654880 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 14:24:24.371923  654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:24:24.385754  654880 system_svc.go:56] duration metric: took 13.866509ms WaitForService to wait for kubelet
	I1008 14:24:24.385788  654880 kubeadm.go:586] duration metric: took 12.628395274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:24:24.385819  654880 node_conditions.go:102] verifying NodePressure condition ...
	I1008 14:24:24.388606  654880 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1008 14:24:24.388634  654880 node_conditions.go:123] node cpu capacity is 8
	I1008 14:24:24.388647  654880 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1008 14:24:24.388663  654880 node_conditions.go:123] node cpu capacity is 8
	I1008 14:24:24.388668  654880 node_conditions.go:105] duration metric: took 2.843574ms to run NodePressure ...
	I1008 14:24:24.388679  654880 start.go:241] waiting for startup goroutines ...
	I1008 14:24:24.388715  654880 start.go:255] writing updated cluster config ...
	I1008 14:24:24.389017  654880 ssh_runner.go:195] Run: rm -f paused
	I1008 14:24:24.393052  654880 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 14:24:24.393669  654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:24:24.396852  654880 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-llvkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.401377  654880 pod_ready.go:94] pod "coredns-66bc5c9577-llvkc" is "Ready"
	I1008 14:24:24.401408  654880 pod_ready.go:86] duration metric: took 4.533488ms for pod "coredns-66bc5c9577-llvkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.403808  654880 pod_ready.go:83] waiting for pod "etcd-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.407791  654880 pod_ready.go:94] pod "etcd-multinode-439307" is "Ready"
	I1008 14:24:24.407814  654880 pod_ready.go:86] duration metric: took 3.984727ms for pod "etcd-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.410014  654880 pod_ready.go:83] waiting for pod "kube-apiserver-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.414225  654880 pod_ready.go:94] pod "kube-apiserver-multinode-439307" is "Ready"
	I1008 14:24:24.414249  654880 pod_ready.go:86] duration metric: took 4.210762ms for pod "kube-apiserver-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.416187  654880 pod_ready.go:83] waiting for pod "kube-controller-manager-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.594705  654880 request.go:683] "Waited before sending request" delay="178.359169ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-439307"
	I1008 14:24:24.795096  654880 request.go:683] "Waited before sending request" delay="197.360136ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307"
	I1008 14:24:24.797827  654880 pod_ready.go:94] pod "kube-controller-manager-multinode-439307" is "Ready"
	I1008 14:24:24.797865  654880 pod_ready.go:86] duration metric: took 381.656304ms for pod "kube-controller-manager-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:24.994392  654880 request.go:683] "Waited before sending request" delay="196.347363ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1008 14:24:24.998079  654880 pod_ready.go:83] waiting for pod "kube-proxy-djg8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:25.194583  654880 request.go:683] "Waited before sending request" delay="196.367193ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djg8q"
	I1008 14:24:25.395013  654880 request.go:683] "Waited before sending request" delay="197.398426ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307-m02"
	I1008 14:24:25.397572  654880 pod_ready.go:94] pod "kube-proxy-djg8q" is "Ready"
	I1008 14:24:25.397604  654880 pod_ready.go:86] duration metric: took 399.496213ms for pod "kube-proxy-djg8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:25.397618  654880 pod_ready.go:83] waiting for pod "kube-proxy-sjzfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:25.595137  654880 request.go:683] "Waited before sending request" delay="197.409064ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjzfx"
	I1008 14:24:25.794319  654880 request.go:683] "Waited before sending request" delay="196.312301ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307"
	I1008 14:24:25.797345  654880 pod_ready.go:94] pod "kube-proxy-sjzfx" is "Ready"
	I1008 14:24:25.797374  654880 pod_ready.go:86] duration metric: took 399.749677ms for pod "kube-proxy-sjzfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:25.994958  654880 request.go:683] "Waited before sending request" delay="197.435121ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1008 14:24:25.997593  654880 pod_ready.go:83] waiting for pod "kube-scheduler-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:26.195068  654880 request.go:683] "Waited before sending request" delay="197.36444ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-439307"
	I1008 14:24:26.395200  654880 request.go:683] "Waited before sending request" delay="197.229852ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307"
	I1008 14:24:26.397809  654880 pod_ready.go:94] pod "kube-scheduler-multinode-439307" is "Ready"
	I1008 14:24:26.397834  654880 pod_ready.go:86] duration metric: took 400.216835ms for pod "kube-scheduler-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:24:26.397846  654880 pod_ready.go:40] duration metric: took 2.004759901s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 14:24:26.444090  654880 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1008 14:24:26.446553  654880 out.go:179] * Done! kubectl is now configured to use "multinode-439307" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	3c70355249fcd       8c811b4aec35f       15 seconds ago       Running             busybox                   0                   f6bf249387eaa       busybox-7b57f96db7-n6rvn                   default
	4ea1a37f26c9f       52546a367cc9e       46 seconds ago       Running             coredns                   0                   d809f9cba67fd       coredns-66bc5c9577-llvkc                   kube-system
	1ab8655881512       6e38f40d628db       46 seconds ago       Running             storage-provisioner       0                   e2da4323cdf8d       storage-provisioner                        kube-system
	eb44427aa7b68       409467f978b4a       57 seconds ago       Running             kindnet-cni               0                   470cdd7a7920c       kindnet-l6pqj                              kube-system
	70d5305f9c0f1       fc25172553d79       57 seconds ago       Running             kube-proxy                0                   734361aeebab7       kube-proxy-sjzfx                           kube-system
	c5ef7b607ae59       5f1f5298c888d       About a minute ago   Running             etcd                      0                   627ec39143d66       etcd-multinode-439307                      kube-system
	7bc5378271f6e       c80c8dbafe7dd       About a minute ago   Running             kube-controller-manager   0                   887929a790edf       kube-controller-manager-multinode-439307   kube-system
	4023d943508d7       7dd6aaa1717ab       About a minute ago   Running             kube-scheduler            0                   9fb7378888c6d       kube-scheduler-multinode-439307            kube-system
	a75297140a138       c3994bc696102       About a minute ago   Running             kube-apiserver            0                   db9ed929a6258       kube-apiserver-multinode-439307            kube-system
	
	
	==> containerd <==
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.109269861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-llvkc,Uid:a445b5ef-8d30-4b7c-a40f-77f2a9072e7f,Namespace:kube-system,Attempt:0,}"
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.111413366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e1d410c3-de2a-4e2a-88c1-93970ce8b254,Namespace:kube-system,Attempt:0,}"
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.205307729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e1d410c3-de2a-4e2a-88c1-93970ce8b254,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2da4323cdf8d7d7b3931ff8c336a482dc0cc57329950586094267711d1b74ae\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.210861374Z" level=info msg="CreateContainer within sandbox \"e2da4323cdf8d7d7b3931ff8c336a482dc0cc57329950586094267711d1b74ae\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.212899403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-llvkc,Uid:a445b5ef-8d30-4b7c-a40f-77f2a9072e7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d809f9cba67fd85761b8285b149afbb37772e4b710c2445e0b7d5cf977684afa\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.217454502Z" level=info msg="CreateContainer within sandbox \"d809f9cba67fd85761b8285b149afbb37772e4b710c2445e0b7d5cf977684afa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.224178373Z" level=info msg="CreateContainer within sandbox \"e2da4323cdf8d7d7b3931ff8c336a482dc0cc57329950586094267711d1b74ae\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"1ab8655881512f6c4b619c636eee1de03f57f734cce6fdc4604bae23d671ab17\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.224770434Z" level=info msg="StartContainer for \"1ab8655881512f6c4b619c636eee1de03f57f734cce6fdc4604bae23d671ab17\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.229804859Z" level=info msg="CreateContainer within sandbox \"d809f9cba67fd85761b8285b149afbb37772e4b710c2445e0b7d5cf977684afa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.230405753Z" level=info msg="StartContainer for \"4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116\""
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.288088455Z" level=info msg="StartContainer for \"1ab8655881512f6c4b619c636eee1de03f57f734cce6fdc4604bae23d671ab17\" returns successfully"
	Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.302363146Z" level=info msg="StartContainer for \"4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116\" returns successfully"
	Oct 08 14:24:27 multinode-439307 containerd[665]: time="2025-10-08T14:24:27.431929318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-n6rvn,Uid:48d40e87-f7eb-4886-84ea-0d1c344bcef4,Namespace:default,Attempt:0,}"
	Oct 08 14:24:27 multinode-439307 containerd[665]: time="2025-10-08T14:24:27.524294263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-n6rvn,Uid:48d40e87-f7eb-4886-84ea-0d1c344bcef4,Namespace:default,Attempt:0,} returns sandbox id \"f6bf249387eaaf48dfa1cfac0cb2eb3646b9e2075be5c9397d97b91ceb9f7c69\""
	Oct 08 14:24:27 multinode-439307 containerd[665]: time="2025-10-08T14:24:27.526837991Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.786377714Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.787080125Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.788316089Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.790697278Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.791452472Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.264570757s"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.791498077Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.798301525Z" level=info msg="CreateContainer within sandbox \"f6bf249387eaaf48dfa1cfac0cb2eb3646b9e2075be5c9397d97b91ceb9f7c69\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.808156445Z" level=info msg="CreateContainer within sandbox \"f6bf249387eaaf48dfa1cfac0cb2eb3646b9e2075be5c9397d97b91ceb9f7c69\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"3c70355249fcd2e6ee6d118b75c6bc3546058b18b6aeb6dce0b1b702d096ac47\""
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.809029634Z" level=info msg="StartContainer for \"3c70355249fcd2e6ee6d118b75c6bc3546058b18b6aeb6dce0b1b702d096ac47\""
	Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.869302769Z" level=info msg="StartContainer for \"3c70355249fcd2e6ee6d118b75c6bc3546058b18b6aeb6dce0b1b702d096ac47\" returns successfully"
	
	
	==> coredns [4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116] <==
	[INFO] 10.244.1.2:46440 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125237s
	[INFO] 10.244.0.3:32802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173381s
	[INFO] 10.244.0.3:52099 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000116887s
	[INFO] 10.244.0.3:55009 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158031s
	[INFO] 10.244.0.3:52826 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015101s
	[INFO] 10.244.0.3:36042 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00007137s
	[INFO] 10.244.0.3:51029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001339s
	[INFO] 10.244.0.3:58795 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130735s
	[INFO] 10.244.0.3:47967 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075412s
	[INFO] 10.244.1.2:39882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00025259s
	[INFO] 10.244.1.2:52814 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000218308s
	[INFO] 10.244.1.2:37521 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148655s
	[INFO] 10.244.1.2:42486 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011547s
	[INFO] 10.244.0.3:44143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169188s
	[INFO] 10.244.0.3:48380 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000235742s
	[INFO] 10.244.0.3:43850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155536s
	[INFO] 10.244.0.3:49494 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093677s
	[INFO] 10.244.1.2:59241 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198155s
	[INFO] 10.244.1.2:55245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162203s
	[INFO] 10.244.1.2:33545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110828s
	[INFO] 10.244.1.2:36918 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139385s
	[INFO] 10.244.0.3:59030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160386s
	[INFO] 10.244.0.3:44681 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139946s
	[INFO] 10.244.0.3:37620 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098444s
	[INFO] 10.244.0.3:59659 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066524s
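	Each line above follows the CoreDNS log plugin's common format: client IP:port, query ID, then in quotes the query type, class, name, transport, request size, DNSSEC-OK flag, and advertised EDNS buffer size, followed by the response code, response flags (qr,aa,rd,ra), response size in bytes, and the service time. For example, in the last line the PTR lookup for 1.67.168.192.in-addr.arpa resolved with NOERROR in a 104-byte answer after roughly 0.07 ms.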
	
	
	==> describe nodes <==
	Name:               multinode-439307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-439307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=multinode-439307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T14_23_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 14:23:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-439307
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 14:24:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 14:24:42 +0000   Wed, 08 Oct 2025 14:23:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 14:24:42 +0000   Wed, 08 Oct 2025 14:23:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 14:24:42 +0000   Wed, 08 Oct 2025 14:23:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 14:24:42 +0000   Wed, 08 Oct 2025 14:23:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-439307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 56d3e6862fcc45b48f25bde7f561b1d7
	  System UUID:                3ecc1d83-e69e-4927-aebb-a9dcae9475e4
	  Boot ID:                    5fdbec2a-e754-47ce-9745-1553567d6c31
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-n6rvn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 coredns-66bc5c9577-llvkc                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     58s
	  kube-system                 etcd-multinode-439307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         63s
	  kube-system                 kindnet-l6pqj                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-multinode-439307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-multinode-439307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-sjzfx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-multinode-439307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node multinode-439307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node multinode-439307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x7 over 68s)  kubelet          Node multinode-439307 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  63s                kubelet          Node multinode-439307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s                kubelet          Node multinode-439307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s                kubelet          Node multinode-439307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           59s                node-controller  Node multinode-439307 event: Registered Node multinode-439307 in Controller
	  Normal  NodeReady                47s                kubelet          Node multinode-439307 status is now: NodeReady
	
	
	Name:               multinode-439307-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-439307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=multinode-439307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_08T14_24_11_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 14:24:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-439307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 14:24:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.3
	  Hostname:    multinode-439307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b74ec156e614a3fac7c415130ea0397
	  System UUID:                ab0bc412-83f7-4153-b57d-32510d60dd56
	  Boot ID:                    5fdbec2a-e754-47ce-9745-1553567d6c31
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-9qspn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kindnet-wch5j               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-proxy-djg8q            0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  NodeHasSufficientMemory  33s (x3 over 33s)  kubelet          Node multinode-439307-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x3 over 33s)  kubelet          Node multinode-439307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x3 over 33s)  kubelet          Node multinode-439307-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  33s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           29s                node-controller  Node multinode-439307-m02 event: Registered Node multinode-439307-m02 in Controller
	  Normal  NodeReady                20s                kubelet          Node multinode-439307-m02 status is now: NodeReady
	
	
	Name:               multinode-439307-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-439307-m03
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 14:24:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-439307-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletNotReady              [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]
	Addresses:
	  InternalIP:  192.168.67.4
	  Hostname:    multinode-439307-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb5019628fa5415a9a6de65b61b0aa10
	  System UUID:                4c1a693e-f511-45ca-9c03-2a547007f3cb
	  Boot ID:                    5fdbec2a-e754-47ce-9745-1553567d6c31
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-58vm5       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-proxy-fs89g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From        Message
	  ----    ------                   ----             ----        -------
	  Normal  Starting                 0s               kube-proxy  
	  Normal  NodeHasSufficientMemory  3s (x3 over 3s)  kubelet     Node multinode-439307-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s (x3 over 3s)  kubelet     Node multinode-439307-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s (x3 over 3s)  kubelet     Node multinode-439307-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3s               kubelet     Updated Node Allocatable limit across pods
	
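Note the contrast across the three node descriptions above: the control plane and multinode-439307-m02 are Ready, while multinode-439307-m03 had only just registered (its pods are 3s old), still carries the node.kubernetes.io/not-ready:NoSchedule taint, has no Lease yet, and reports Ready=False with "cni plugin not initialized". A minimal follow-up sketch, using standard kubectl and the context/node names from this run, to distinguish "still initializing" from "stuck":

  # Block until the freshly added worker reports Ready, or time out.
  kubectl --context multinode-439307 wait node/multinode-439307-m03 \
    --for=condition=Ready --timeout=120s
  # On timeout, print the Ready condition's message for the node.
  kubectl --context multinode-439307 get node multinode-439307-m03 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}{"\n"}'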
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 26 b3 37 bf 19 08 06
	[  +0.000410] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 1c 28 4b 91 c9 08 06
	[Oct 8 13:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 16 3f fe bd b6 08 06
	[  +0.044604] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea 40 7d d0 6d a6 08 06
	[ +10.339808] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f2 86 26 6c 97 dc 08 06
	[  +2.975774] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2a 61 e9 d6 10 e3 08 06
	[  +0.101555] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea fa 29 51 08 ac 08 06
	[ +30.965246] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 37 46 57 22 c1 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 40 7d d0 6d a6 08 06
	[Oct 8 14:00] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 9c 9c 72 fb 11 08 06
	[  +0.000628] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea fa 29 51 08 ac 08 06
	[  +2.730130] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 a4 4e 39 b9 db 08 06
	[  +0.000456] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 86 26 6c 97 dc 08 06
	
	
	==> etcd [c5ef7b607ae59f8f6aeebf4ab11b5560d14e184780133f6a6973d2dc59d69c2c] <==
	{"level":"warn","ts":"2025-10-08T14:23:38.147870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.154263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.163022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.169346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.175903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.182506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.188820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.195857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.202025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.208239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.221089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.228181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.235952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.249350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.257305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.263748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.269837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.276382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.282566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.288832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.302605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.308859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.315043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:23:38.361302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T14:24:35.554339Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.090033ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289968003519192253 > lease_revoke:<id:1fc799c434c59c06>","response":"size:29"}
	
	
	==> kernel <==
	 14:24:44 up  2:07,  0 user,  load average: 1.10, 1.49, 1.84
	Linux multinode-439307 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eb44427aa7b68d0cb5246a5d10b69e69a310ad7dbe803f32fbfe929362b00e9b] <==
	time="2025-10-08T14:23:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 14:23:47.690096       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 14:23:47.690125       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 14:23:47.690138       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 14:23:47.690279       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1008 14:23:48.090594       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 14:23:48.090626       1 metrics.go:72] Registering metrics
	I1008 14:23:48.090682       1 controller.go:711] "Syncing nftables rules"
	I1008 14:23:57.691180       1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
	I1008 14:23:57.691253       1 main.go:301] handling current node
	I1008 14:24:07.697046       1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
	I1008 14:24:07.697089       1 main.go:301] handling current node
	I1008 14:24:17.690952       1 main.go:297] Handling node with IPs: map[192.168.67.3:{}]
	I1008 14:24:17.691012       1 main.go:324] Node multinode-439307-m02 has CIDR [10.244.1.0/24] 
	I1008 14:24:17.691311       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.67.3 Flags: [] Table: 0 Realm: 0} 
	I1008 14:24:17.691488       1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
	I1008 14:24:17.691506       1 main.go:301] handling current node
	I1008 14:24:27.690151       1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
	I1008 14:24:27.690209       1 main.go:301] handling current node
	I1008 14:24:27.690224       1 main.go:297] Handling node with IPs: map[192.168.67.3:{}]
	I1008 14:24:27.690228       1 main.go:324] Node multinode-439307-m02 has CIDR [10.244.1.0/24] 
	I1008 14:24:37.696064       1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
	I1008 14:24:37.696102       1 main.go:301] handling current node
	I1008 14:24:37.696118       1 main.go:297] Handling node with IPs: map[192.168.67.3:{}]
	I1008 14:24:37.696123       1 main.go:324] Node multinode-439307-m02 has CIDR [10.244.1.0/24] 
	
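Consistent with that NotReady state, the kindnet log above reconciles only 192.168.67.2 (the primary) and 192.168.67.3 (m02) through its last sync at 14:24:37; m03 (192.168.67.4) registered at 14:24:41, after this capture ends, so no route for 10.244.2.0/24 ever appears. To check whether kindnet later scheduled onto and wired up the new node, something like the following would work (assuming the daemonset carries an app=kindnet label, which the pod names in this report suggest):

  # Show where the kindnet pods landed, then tail their recent logs.
  kubectl --context multinode-439307 -n kube-system get pods -o wide | grep kindnet
  kubectl --context multinode-439307 -n kube-system logs -l app=kindnet --tail=20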
	
	==> kube-apiserver [a75297140a13849f0bbb8691fcb7ec90b635a193300494f88d6ee8bb6961ae9a] <==
	I1008 14:23:39.722440       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 14:23:40.200145       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 14:23:40.237907       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 14:23:40.325514       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1008 14:23:40.331916       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I1008 14:23:40.333100       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 14:23:40.337576       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 14:23:40.736405       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 14:23:41.412400       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 14:23:41.423756       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1008 14:23:41.431519       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1008 14:23:46.190246       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 14:23:46.194096       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 14:23:46.390973       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1008 14:23:46.839639       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1008 14:24:29.836739       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60732: use of closed network connection
	E1008 14:24:30.001569       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60754: use of closed network connection
	E1008 14:24:30.207099       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60770: use of closed network connection
	E1008 14:24:30.374520       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60784: use of closed network connection
	E1008 14:24:30.535911       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60810: use of closed network connection
	E1008 14:24:30.697115       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60834: use of closed network connection
	E1008 14:24:30.974454       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60862: use of closed network connection
	E1008 14:24:31.133503       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60876: use of closed network connection
	E1008 14:24:31.290639       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60900: use of closed network connection
	E1008 14:24:31.447827       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60924: use of closed network connection
	
	
	==> kube-controller-manager [7bc5378271f6ec3084def02b6c09453b95f33b6c40f004a8ecd7ddaca4ee2e23] <==
	I1008 14:23:45.735304       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1008 14:23:45.736140       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1008 14:23:45.736180       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 14:23:45.736201       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1008 14:23:45.736257       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 14:23:45.736247       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 14:23:45.736431       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1008 14:23:45.736317       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1008 14:23:45.736499       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 14:23:45.736731       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1008 14:23:45.740043       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 14:23:45.740066       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1008 14:23:45.742483       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 14:23:45.745803       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 14:23:45.752135       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 14:23:45.757502       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1008 14:23:45.762890       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 14:24:00.736958       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1008 14:24:11.255158       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-439307-m02\" does not exist"
	I1008 14:24:11.267044       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-439307-m02" podCIDRs=["10.244.1.0/24"]
	I1008 14:24:15.739150       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-439307-m02"
	I1008 14:24:24.240565       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-439307-m02"
	I1008 14:24:41.773017       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-439307-m02"
	I1008 14:24:41.773446       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-439307-m03\" does not exist"
	I1008 14:24:41.785959       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-439307-m03" podCIDRs=["10.244.2.0/24"]
	
	
	==> kube-proxy [70d5305f9c0f1e614d86457efd99bfbb2a639a470f299474edd5bdee53d17425] <==
	I1008 14:23:46.942146       1 server_linux.go:53] "Using iptables proxy"
	I1008 14:23:47.054382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 14:23:47.154807       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 14:23:47.154859       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.67.2"]
	E1008 14:23:47.154951       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 14:23:47.180008       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 14:23:47.180073       1 server_linux.go:132] "Using iptables Proxier"
	I1008 14:23:47.186411       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 14:23:47.187151       1 server.go:527] "Version info" version="v1.34.1"
	I1008 14:23:47.187189       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 14:23:47.189577       1 config.go:200] "Starting service config controller"
	I1008 14:23:47.189598       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 14:23:47.189627       1 config.go:106] "Starting endpoint slice config controller"
	I1008 14:23:47.189632       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 14:23:47.189645       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 14:23:47.189650       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 14:23:47.189881       1 config.go:309] "Starting node config controller"
	I1008 14:23:47.189888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 14:23:47.189894       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 14:23:47.290449       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 14:23:47.290467       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 14:23:47.290469       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4023d943508d78a5c887a79feaa82148d136b6c293acc44418506ac640d4c238] <==
	E1008 14:23:38.760223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1008 14:23:38.760331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 14:23:38.760371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 14:23:38.760376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 14:23:38.760448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 14:23:38.760458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 14:23:38.760452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 14:23:38.760534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 14:23:38.760564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 14:23:38.760602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 14:23:38.760684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 14:23:38.760689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 14:23:38.760727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1008 14:23:38.760774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 14:23:38.760787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 14:23:39.582212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 14:23:39.649725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 14:23:39.743595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 14:23:39.755043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1008 14:23:39.833591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 14:23:39.896154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1008 14:23:39.927234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 14:23:39.948345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1008 14:23:39.962957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1008 14:23:41.659308       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.315174    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-439307" podStartSLOduration=1.3151361129999999 podStartE2EDuration="1.315136113s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.304935753 +0000 UTC m=+1.126741170" watchObservedRunningTime="2025-10-08 14:23:42.315136113 +0000 UTC m=+1.136941525"
	Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.326049    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-439307" podStartSLOduration=1.3260291149999999 podStartE2EDuration="1.326029115s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.315323385 +0000 UTC m=+1.137128872" watchObservedRunningTime="2025-10-08 14:23:42.326029115 +0000 UTC m=+1.147834531"
	Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.326174    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-multinode-439307" podStartSLOduration=1.326165456 podStartE2EDuration="1.326165456s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.325917199 +0000 UTC m=+1.147722617" watchObservedRunningTime="2025-10-08 14:23:42.326165456 +0000 UTC m=+1.147970871"
	Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.352482    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-multinode-439307" podStartSLOduration=1.352459294 podStartE2EDuration="1.352459294s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.33893649 +0000 UTC m=+1.160741907" watchObservedRunningTime="2025-10-08 14:23:42.352459294 +0000 UTC m=+1.174264711"
	Oct 08 14:23:45 multinode-439307 kubelet[1486]: I1008 14:23:45.703138    1486 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 08 14:23:45 multinode-439307 kubelet[1486]: I1008 14:23:45.703898    1486 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481639    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fea0f284-17d4-438c-91a6-14831ce6ce5c-lib-modules\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481688    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jstr\" (UniqueName: \"kubernetes.io/projected/1211872c-1472-435c-a117-2656ba2fca8e-kube-api-access-6jstr\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481713    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fea0f284-17d4-438c-91a6-14831ce6ce5c-cni-cfg\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481727    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1211872c-1472-435c-a117-2656ba2fca8e-xtables-lock\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481745    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fea0f284-17d4-438c-91a6-14831ce6ce5c-xtables-lock\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481763    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1211872c-1472-435c-a117-2656ba2fca8e-lib-modules\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481786    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb5rk\" (UniqueName: \"kubernetes.io/projected/fea0f284-17d4-438c-91a6-14831ce6ce5c-kube-api-access-nb5rk\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
	Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481806    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1211872c-1472-435c-a117-2656ba2fca8e-kube-proxy\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
	Oct 08 14:23:47 multinode-439307 kubelet[1486]: I1008 14:23:47.299755    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sjzfx" podStartSLOduration=1.299713567 podStartE2EDuration="1.299713567s" podCreationTimestamp="2025-10-08 14:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:47.299676523 +0000 UTC m=+6.121481941" watchObservedRunningTime="2025-10-08 14:23:47.299713567 +0000 UTC m=+6.121518985"
	Oct 08 14:23:48 multinode-439307 kubelet[1486]: I1008 14:23:48.313744    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-l6pqj" podStartSLOduration=2.313719899 podStartE2EDuration="2.313719899s" podCreationTimestamp="2025-10-08 14:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:48.313549742 +0000 UTC m=+7.135355171" watchObservedRunningTime="2025-10-08 14:23:48.313719899 +0000 UTC m=+7.135525315"
	Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.772604    1486 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853219    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw6pb\" (UniqueName: \"kubernetes.io/projected/a445b5ef-8d30-4b7c-a40f-77f2a9072e7f-kube-api-access-rw6pb\") pod \"coredns-66bc5c9577-llvkc\" (UID: \"a445b5ef-8d30-4b7c-a40f-77f2a9072e7f\") " pod="kube-system/coredns-66bc5c9577-llvkc"
	Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853273    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e1d410c3-de2a-4e2a-88c1-93970ce8b254-tmp\") pod \"storage-provisioner\" (UID: \"e1d410c3-de2a-4e2a-88c1-93970ce8b254\") " pod="kube-system/storage-provisioner"
	Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853308    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlb24\" (UniqueName: \"kubernetes.io/projected/e1d410c3-de2a-4e2a-88c1-93970ce8b254-kube-api-access-nlb24\") pod \"storage-provisioner\" (UID: \"e1d410c3-de2a-4e2a-88c1-93970ce8b254\") " pod="kube-system/storage-provisioner"
	Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853418    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a445b5ef-8d30-4b7c-a40f-77f2a9072e7f-config-volume\") pod \"coredns-66bc5c9577-llvkc\" (UID: \"a445b5ef-8d30-4b7c-a40f-77f2a9072e7f\") " pod="kube-system/coredns-66bc5c9577-llvkc"
	Oct 08 14:23:58 multinode-439307 kubelet[1486]: I1008 14:23:58.331131    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.331114913 podStartE2EDuration="11.331114913s" podCreationTimestamp="2025-10-08 14:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:58.330824691 +0000 UTC m=+17.152630110" watchObservedRunningTime="2025-10-08 14:23:58.331114913 +0000 UTC m=+17.152920351"
	Oct 08 14:23:58 multinode-439307 kubelet[1486]: I1008 14:23:58.344349    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-llvkc" podStartSLOduration=12.344324469 podStartE2EDuration="12.344324469s" podCreationTimestamp="2025-10-08 14:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:58.34404488 +0000 UTC m=+17.165850298" watchObservedRunningTime="2025-10-08 14:23:58.344324469 +0000 UTC m=+17.166129896"
	Oct 08 14:24:27 multinode-439307 kubelet[1486]: I1008 14:24:27.247091    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g9nr\" (UniqueName: \"kubernetes.io/projected/48d40e87-f7eb-4886-84ea-0d1c344bcef4-kube-api-access-9g9nr\") pod \"busybox-7b57f96db7-n6rvn\" (UID: \"48d40e87-f7eb-4886-84ea-0d1c344bcef4\") " pod="default/busybox-7b57f96db7-n6rvn"
	Oct 08 14:24:29 multinode-439307 kubelet[1486]: I1008 14:24:29.399108    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-n6rvn" podStartSLOduration=1.132795141 podStartE2EDuration="2.399085602s" podCreationTimestamp="2025-10-08 14:24:27 +0000 UTC" firstStartedPulling="2025-10-08 14:24:27.526216854 +0000 UTC m=+46.348022263" lastFinishedPulling="2025-10-08 14:24:28.792507312 +0000 UTC m=+47.614312724" observedRunningTime="2025-10-08 14:24:29.398743884 +0000 UTC m=+48.220549303" watchObservedRunningTime="2025-10-08 14:24:29.399085602 +0000 UTC m=+48.220891019"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-439307 -n multinode-439307
helpers_test.go:269: (dbg) Run:  kubectl --context multinode-439307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiNode/serial/MultiNodeLabels FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/MultiNodeLabels (1.88s)
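The label assertion fails for a visible reason: in the describe output above, m02 carries the full minikube.k8s.io/* label set while the half-added m03 has only the default kubernetes.io labels, because the preceding node add evidently never reached its labeling step. A local reproduction sketch, reusing the profile name from the transcripts above (the start flags mirror this job's docker/containerd configuration; --nodes=2 is an assumption to recreate the pre-test topology):

  # Bring up a two-node profile, then add the third node with verbose logging.
  out/minikube-linux-amd64 start -p multinode-439307 --driver=docker --container-runtime=containerd --nodes=2
  out/minikube-linux-amd64 node add -p multinode-439307 -v=5 --alsologtostderr
  # Re-check the labels that TestMultiNode/serial/MultiNodeLabels asserts on.
  kubectl --context multinode-439307 get nodes --show-labels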


Test pass (305/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.03
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.36
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.4
21 TestBinaryMirror 0.82
22 TestOffline 55.18
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 164.92
29 TestAddons/serial/Volcano 39.28
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.47
35 TestAddons/parallel/Registry 14.18
36 TestAddons/parallel/RegistryCreds 0.72
37 TestAddons/parallel/Ingress 20.02
38 TestAddons/parallel/InspektorGadget 5.29
39 TestAddons/parallel/MetricsServer 5.7
41 TestAddons/parallel/CSI 33
42 TestAddons/parallel/Headlamp 17.56
43 TestAddons/parallel/CloudSpanner 5.52
44 TestAddons/parallel/LocalPath 50.79
45 TestAddons/parallel/NvidiaDevicePlugin 6.04
46 TestAddons/parallel/Yakd 11.76
47 TestAddons/parallel/AmdGpuDevicePlugin 6.04
48 TestAddons/StoppedEnableDisable 12.77
49 TestCertOptions 28.77
50 TestCertExpiration 213.7
52 TestForceSystemdFlag 23.5
53 TestForceSystemdEnv 39.68
54 TestDockerEnvContainerd 35.74
55 TestKVMDriverInstallOrUpdate 0.76
59 TestErrorSpam/setup 20.72
60 TestErrorSpam/start 0.66
61 TestErrorSpam/status 0.97
62 TestErrorSpam/pause 1.51
63 TestErrorSpam/unpause 1.6
64 TestErrorSpam/stop 1.43
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 39.3
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.16
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.52
76 TestFunctional/serial/CacheCmd/cache/add_local 0.88
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 41.5
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.25
87 TestFunctional/serial/LogsFileCmd 1.28
88 TestFunctional/serial/InvalidService 3.8
90 TestFunctional/parallel/ConfigCmd 0.4
91 TestFunctional/parallel/DashboardCmd 9.82
92 TestFunctional/parallel/DryRun 0.44
93 TestFunctional/parallel/InternationalLanguage 0.18
94 TestFunctional/parallel/StatusCmd 1.13
98 TestFunctional/parallel/ServiceCmdConnect 13.81
99 TestFunctional/parallel/AddonsCmd 0.27
100 TestFunctional/parallel/PersistentVolumeClaim 32.43
102 TestFunctional/parallel/SSHCmd 0.57
103 TestFunctional/parallel/CpCmd 1.93
104 TestFunctional/parallel/MySQL 18.36
105 TestFunctional/parallel/FileSync 0.34
106 TestFunctional/parallel/CertSync 1.87
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
114 TestFunctional/parallel/License 0.28
115 TestFunctional/parallel/ServiceCmd/DeployApp 8.2
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
117 TestFunctional/parallel/ProfileCmd/profile_list 0.46
118 TestFunctional/parallel/MountCmd/any-port 7.22
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
120 TestFunctional/parallel/Version/short 0.06
121 TestFunctional/parallel/Version/components 0.52
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
126 TestFunctional/parallel/ImageCommands/ImageBuild 3.5
127 TestFunctional/parallel/ImageCommands/Setup 0.43
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.19
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.01
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.21
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.59
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
135 TestFunctional/parallel/ServiceCmd/List 0.42
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
138 TestFunctional/parallel/MountCmd/specific-port 1.87
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
140 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.23
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
144 TestFunctional/parallel/ServiceCmd/Format 0.4
145 TestFunctional/parallel/ServiceCmd/URL 0.39
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 113.64
164 TestMultiControlPlane/serial/DeployApp 5.19
165 TestMultiControlPlane/serial/PingHostFromPods 1.12
166 TestMultiControlPlane/serial/AddWorkerNode 23.91
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
169 TestMultiControlPlane/serial/CopyFile 17.49
170 TestMultiControlPlane/serial/StopSecondaryNode 12.68
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
172 TestMultiControlPlane/serial/RestartSecondaryNode 9.29
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.96
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.32
175 TestMultiControlPlane/serial/DeleteSecondaryNode 9.26
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
177 TestMultiControlPlane/serial/StopCluster 35.96
178 TestMultiControlPlane/serial/RestartCluster 54.39
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
180 TestMultiControlPlane/serial/AddSecondaryNode 44.89
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.94
185 TestJSONOutput/start/Command 40.6
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.78
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.64
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.79
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 29.18
211 TestKicCustomNetwork/use_default_bridge_network 24.55
212 TestKicExistingNetwork 25.36
213 TestKicCustomSubnet 25.1
214 TestKicStaticIP 27.94
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 46.77
219 TestMountStart/serial/StartWithMountFirst 5.17
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 5.33
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.67
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.19
226 TestMountStart/serial/RestartStopped 7.05
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 69.05
231 TestMultiNode/serial/DeployApp2Nodes 3.75
232 TestMultiNode/serial/PingHostFrom2Pods 0.75
235 TestMultiNode/serial/ProfileList 0.7
236 TestMultiNode/serial/CopyFile 10.16
237 TestMultiNode/serial/StopNode 2.28
238 TestMultiNode/serial/StartAfterStop 7.31
239 TestMultiNode/serial/RestartKeepsNodes 70.79
240 TestMultiNode/serial/DeleteNode 5.28
241 TestMultiNode/serial/StopMultiNode 23.92
242 TestMultiNode/serial/RestartMultiNode 47.41
243 TestMultiNode/serial/ValidateNameConflict 24.67
248 TestPreload 103.32
250 TestScheduledStopUnix 96.86
253 TestInsufficientStorage 9.49
254 TestRunningBinaryUpgrade 48.67
256 TestKubernetesUpgrade 325.26
257 TestMissingContainerUpgrade 121.85
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
263 TestNoKubernetes/serial/StartWithK8s 32.62
268 TestNetworkPlugins/group/false 8.27
272 TestNoKubernetes/serial/StartWithStopK8s 14.98
273 TestNoKubernetes/serial/Start 5.21
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
275 TestNoKubernetes/serial/ProfileList 7.28
276 TestStoppedBinaryUpgrade/Setup 0.52
277 TestStoppedBinaryUpgrade/Upgrade 60.06
278 TestNoKubernetes/serial/Stop 2.57
279 TestNoKubernetes/serial/StartNoArgs 6.5
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
281 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
290 TestPause/serial/Start 40.34
291 TestPause/serial/SecondStartNoReconfiguration 6.39
292 TestPause/serial/Pause 0.9
293 TestPause/serial/VerifyStatus 0.32
294 TestPause/serial/Unpause 1.34
295 TestPause/serial/PauseAgain 0.84
296 TestPause/serial/DeletePaused 2.69
297 TestPause/serial/VerifyDeletedResources 14.84
298 TestNetworkPlugins/group/auto/Start 44.07
299 TestNetworkPlugins/group/kindnet/Start 41.01
300 TestNetworkPlugins/group/auto/KubeletFlags 0.29
301 TestNetworkPlugins/group/auto/NetCatPod 8.19
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
304 TestNetworkPlugins/group/kindnet/NetCatPod 8.2
305 TestNetworkPlugins/group/auto/DNS 0.15
306 TestNetworkPlugins/group/auto/Localhost 0.11
307 TestNetworkPlugins/group/auto/HairPin 0.13
308 TestNetworkPlugins/group/kindnet/DNS 0.13
309 TestNetworkPlugins/group/kindnet/Localhost 0.12
310 TestNetworkPlugins/group/kindnet/HairPin 0.13
311 TestNetworkPlugins/group/calico/Start 52.83
312 TestNetworkPlugins/group/custom-flannel/Start 47.48
313 TestNetworkPlugins/group/calico/ControllerPod 6.01
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
316 TestNetworkPlugins/group/calico/KubeletFlags 0.32
317 TestNetworkPlugins/group/calico/NetCatPod 8.18
318 TestNetworkPlugins/group/custom-flannel/DNS 0.14
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
321 TestNetworkPlugins/group/calico/DNS 0.13
322 TestNetworkPlugins/group/calico/Localhost 0.11
323 TestNetworkPlugins/group/calico/HairPin 0.11
324 TestNetworkPlugins/group/enable-default-cni/Start 81.52
325 TestNetworkPlugins/group/flannel/Start 64.61
326 TestNetworkPlugins/group/bridge/Start 76.46
328 TestStartStop/group/old-k8s-version/serial/FirstStart 52.53
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
332 TestStartStop/group/old-k8s-version/serial/DeployApp 9.29
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
334 TestNetworkPlugins/group/flannel/NetCatPod 8.19
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
338 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.98
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
340 TestNetworkPlugins/group/bridge/NetCatPod 8.21
341 TestStartStop/group/old-k8s-version/serial/Stop 12.01
342 TestNetworkPlugins/group/flannel/DNS 0.16
343 TestNetworkPlugins/group/flannel/Localhost 0.13
344 TestNetworkPlugins/group/flannel/HairPin 0.14
345 TestNetworkPlugins/group/bridge/DNS 0.15
346 TestNetworkPlugins/group/bridge/Localhost 0.14
347 TestNetworkPlugins/group/bridge/HairPin 0.14
348 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
349 TestStartStop/group/old-k8s-version/serial/SecondStart 51.13
351 TestStartStop/group/embed-certs/serial/FirstStart 45.34
353 TestStartStop/group/no-preload/serial/FirstStart 52.61
355 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 47.58
356 TestStartStop/group/embed-certs/serial/DeployApp 8.28
357 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
358 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.84
360 TestStartStop/group/embed-certs/serial/Stop 13.96
361 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
362 TestStartStop/group/old-k8s-version/serial/Pause 2.94
363 TestStartStop/group/no-preload/serial/DeployApp 8.29
364 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.32
366 TestStartStop/group/newest-cni/serial/FirstStart 27.62
367 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
368 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
369 TestStartStop/group/embed-certs/serial/SecondStart 49.09
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
371 TestStartStop/group/no-preload/serial/Stop 12.04
372 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.01
373 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
375 TestStartStop/group/no-preload/serial/SecondStart 47.44
376 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.12
377 TestStartStop/group/newest-cni/serial/DeployApp 0
378 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.65
379 TestStartStop/group/newest-cni/serial/Stop 1.3
380 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
381 TestStartStop/group/newest-cni/serial/SecondStart 11.78
382 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
385 TestStartStop/group/newest-cni/serial/Pause 3.34
386 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
389 TestStartStop/group/embed-certs/serial/Pause 2.83
390 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
391 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
392 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
393 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
394 TestStartStop/group/no-preload/serial/Pause 2.82
395 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
396 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
397 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.76
TestDownloadOnly/v1.28.0/json-events (4.03s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-267137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-267137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.02547304s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.03s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1008 14:02:58.433307  516787 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1008 14:02:58.433427  516787 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
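The preload check passes as soon as the tarball cached by the json-events download above is present on disk. A minimal way to confirm the same cache state by hand (path taken from the "Found local preload" line; the MINIKUBE_HOME fallback is an assumption, not something this test sets):

    # Sketch: list the cached preload tarballs minikube reported above
    ls -lh "${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball/"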

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-267137
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-267137: exit status 85 (65.74925ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-267137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-267137 │ jenkins │ v1.37.0 │ 08 Oct 25 14:02 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:02:54
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:02:54.452658  516799 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:02:54.452795  516799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:02:54.452806  516799 out.go:374] Setting ErrFile to fd 2...
	I1008 14:02:54.452809  516799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:02:54.453023  516799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	W1008 14:02:54.453144  516799 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21681-513010/.minikube/config/config.json: open /home/jenkins/minikube-integration/21681-513010/.minikube/config/config.json: no such file or directory
	I1008 14:02:54.453609  516799 out.go:368] Setting JSON to true
	I1008 14:02:54.454592  516799 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6323,"bootTime":1759925851,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:02:54.454700  516799 start.go:141] virtualization: kvm guest
	I1008 14:02:54.456869  516799 out.go:99] [download-only-267137] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1008 14:02:54.457114  516799 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball: no such file or directory
	I1008 14:02:54.457140  516799 notify.go:220] Checking for updates...
	I1008 14:02:54.458345  516799 out.go:171] MINIKUBE_LOCATION=21681
	I1008 14:02:54.459607  516799 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:02:54.460864  516799 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	I1008 14:02:54.462175  516799 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	I1008 14:02:54.463385  516799 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1008 14:02:54.465474  516799 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 14:02:54.465709  516799 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:02:54.490033  516799 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:02:54.490111  516799 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:02:54.549700  516799 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-08 14:02:54.539613538 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:02:54.549824  516799 docker.go:318] overlay module found
	I1008 14:02:54.551290  516799 out.go:99] Using the docker driver based on user configuration
	I1008 14:02:54.551334  516799 start.go:305] selected driver: docker
	I1008 14:02:54.551344  516799 start.go:925] validating driver "docker" against <nil>
	I1008 14:02:54.551444  516799 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:02:54.604759  516799 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-08 14:02:54.594717399 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:02:54.604965  516799 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 14:02:54.605542  516799 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1008 14:02:54.605731  516799 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 14:02:54.607375  516799 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-267137 host does not exist
	  To start a cluster, run: "minikube start -p download-only-267137"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
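This pass is asserting a failure: the Audit table shows the start run never recorded an END TIME, and the trailer notes the control-plane node was never created, so "minikube logs" exiting with status 85 is what the test expects. A hand check is just the same command plus its exit code (a sketch; profile name taken from the log):

    out/minikube-linux-amd64 logs -p download-only-267137
    echo "exit status: $?"   # non-zero here is the expected outcome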

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-267137
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (4.36s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-258321 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-258321 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.364102136s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.36s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1008 14:03:03.222880  516787 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1008 14:03:03.222929  516787 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-258321
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-258321: exit status 85 (66.012988ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-267137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-267137 │ jenkins │ v1.37.0 │ 08 Oct 25 14:02 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 08 Oct 25 14:02 UTC │ 08 Oct 25 14:02 UTC │
	│ delete  │ -p download-only-267137                                                                                                                                                               │ download-only-267137 │ jenkins │ v1.37.0 │ 08 Oct 25 14:02 UTC │ 08 Oct 25 14:02 UTC │
	│ start   │ -o=json --download-only -p download-only-258321 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-258321 │ jenkins │ v1.37.0 │ 08 Oct 25 14:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:02:58
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:02:58.902418  517156 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:02:58.902725  517156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:02:58.902737  517156 out.go:374] Setting ErrFile to fd 2...
	I1008 14:02:58.902743  517156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:02:58.902997  517156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	I1008 14:02:58.903505  517156 out.go:368] Setting JSON to true
	I1008 14:02:58.904464  517156 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6328,"bootTime":1759925851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:02:58.904572  517156 start.go:141] virtualization: kvm guest
	I1008 14:02:58.906415  517156 out.go:99] [download-only-258321] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:02:58.906604  517156 notify.go:220] Checking for updates...
	I1008 14:02:58.907834  517156 out.go:171] MINIKUBE_LOCATION=21681
	I1008 14:02:58.909406  517156 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:02:58.910798  517156 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	I1008 14:02:58.912139  517156 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	I1008 14:02:58.913446  517156 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1008 14:02:58.915805  517156 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 14:02:58.916051  517156 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:02:58.939898  517156 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:02:58.939966  517156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:02:58.999747  517156 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-08 14:02:58.988264384 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:02:58.999852  517156 docker.go:318] overlay module found
	I1008 14:02:59.001587  517156 out.go:99] Using the docker driver based on user configuration
	I1008 14:02:59.001623  517156 start.go:305] selected driver: docker
	I1008 14:02:59.001631  517156 start.go:925] validating driver "docker" against <nil>
	I1008 14:02:59.001755  517156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:02:59.056457  517156 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-08 14:02:59.047508599 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:02:59.056700  517156 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 14:02:59.057280  517156 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1008 14:02:59.057440  517156 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 14:02:59.059224  517156 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-258321 host does not exist
	  To start a cluster, run: "minikube start -p download-only-258321"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-258321
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.4s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-055428 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-055428" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-055428
--- PASS: TestDownloadOnlyKic (0.40s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
I1008 14:03:04.315563  516787 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-060138 --alsologtostderr --binary-mirror http://127.0.0.1:42119 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-060138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-060138
--- PASS: TestBinaryMirror (0.82s)
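The binary.go line above shows why this test is quick: kubectl is fetched through a checksum-qualified URL rather than being cached locally. A hand-rolled equivalent of that integrity check, using the same dl.k8s.io layout (standard upstream pattern, not part of the test itself):

    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl
    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check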

TestOffline (55.18s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-925961 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-925961 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (52.669339667s)
helpers_test.go:175: Cleaning up "offline-containerd-925961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-925961
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-925961: (2.507114715s)
--- PASS: TestOffline (55.18s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-447971
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-447971: exit status 85 (57.173108ms)

-- stdout --
	* Profile "addons-447971" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-447971"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-447971
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-447971: exit status 85 (57.416054ms)

-- stdout --
	* Profile "addons-447971" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-447971"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (164.92s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-447971 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-447971 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m44.915256695s)
--- PASS: TestAddons/Setup (164.92s)

TestAddons/serial/Volcano (39.28s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 16.610779ms
addons_test.go:876: volcano-admission stabilized in 16.688149ms
addons_test.go:884: volcano-controller stabilized in 16.65743ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-v2s6q" [c93fb3b4-099e-4ef3-8869-1b31bbeae3d5] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003572094s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-7z5zw" [f568e03f-0fb6-40ea-a614-fafddb8aadd7] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004251865s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-fh7z6" [cc8da3f0-df53-4354-9e8b-ac1367195a42] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004450661s
addons_test.go:903: (dbg) Run:  kubectl --context addons-447971 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-447971 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-447971 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [7d919804-a54b-4099-9104-9c51f6769e0b] Pending
helpers_test.go:352: "test-job-nginx-0" [7d919804-a54b-4099-9104-9c51f6769e0b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [7d919804-a54b-4099-9104-9c51f6769e0b] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003223772s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-447971 addons disable volcano --alsologtostderr -v=1: (11.93871727s)
--- PASS: TestAddons/serial/Volcano (39.28s)
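Each "waiting 6m0s for pods matching ..." step above is the harness polling for labelled pods to report Running. Outside the harness, roughly the same gate can be expressed with a single kubectl wait (label and namespace taken from the log; the timeout value is illustrative):

    kubectl --context addons-447971 -n my-volcano wait pod \
      -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=180s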

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-447971 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-447971 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.47s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-447971 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-447971 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [348aa297-5f5c-4403-b7c6-68cdc17b22b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [348aa297-5f5c-4403-b7c6-68cdc17b22b5] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004086997s
addons_test.go:694: (dbg) Run:  kubectl --context addons-447971 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-447971 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-447971 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.47s)
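The two printenv calls confirm that the gcp-auth addon injected fake credentials into the pod's environment. When checking by hand they can be merged into one exec (same commands as in the log, combined for brevity):

    kubectl --context addons-447971 exec busybox -- \
      printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT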

TestAddons/parallel/Registry (14.18s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.176225ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-j4dql" [8c677fa6-f544-48a0-9cf4-6c4e9c0ce25b] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003004144s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-kwvv5" [534f2168-2b77-4817-bb91-b7ba362a1998] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003766762s
addons_test.go:392: (dbg) Run:  kubectl --context addons-447971 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-447971 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-447971 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.347959458s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 ip
2025/10/08 14:07:00 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.18s)
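The DEBUG GET at 14:07:00 is the test reaching the registry addon over the node IP returned by "minikube ip". To poke the same endpoint manually, the registry's HTTP API root can be queried (the /v2/_catalog path is an assumption from the standard Docker registry API, not something this test exercises):

    curl -s "http://$(out/minikube-linux-amd64 -p addons-447971 ip):5000/v2/_catalog"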

TestAddons/parallel/RegistryCreds (0.72s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.35348ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-447971
addons_test.go:332: (dbg) Run:  kubectl --context addons-447971 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

TestAddons/parallel/Ingress (20.02s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-447971 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-447971 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-447971 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1bef4557-dfe9-43ed-9918-ebc2217222d4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1bef4557-dfe9-43ed-9918-ebc2217222d4] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003447313s
I1008 14:07:10.613875  516787 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-447971 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-447971 addons disable ingress-dns --alsologtostderr -v=1: (1.839114515s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-447971 addons disable ingress --alsologtostderr -v=1: (7.76495647s)
--- PASS: TestAddons/parallel/Ingress (20.02s)
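Both verification paths above are easy to reproduce by hand: curl the ingress from inside the node with a spoofed Host header, and resolve an ingress-dns name against the node IP (both commands are lifted from the log, with the IP parameterised via "minikube ip"):

    out/minikube-linux-amd64 -p addons-447971 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-447971 ip)"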

TestAddons/parallel/InspektorGadget (5.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-wf7fw" [7850c236-3ee2-4a5e-a1cc-7c3c5a8af7a3] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007126558s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.29s)

TestAddons/parallel/MetricsServer (5.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
I1008 14:06:53.003177  516787 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1008 14:06:53.009923  516787 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
addons_test.go:455: metrics-server stabilized in 5.555976ms
I1008 14:06:53.010131  516787 kapi.go:107] duration metric: took 6.970983ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-pv6pt" [e4eab473-e7ff-4ae7-9fb2-8980298aee01] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003745085s
addons_test.go:463: (dbg) Run:  kubectl --context addons-447971 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)

TestAddons/parallel/CSI (33s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.010901ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-447971 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-447971 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [961c1391-77fb-4900-adeb-a2b9b125d724] Pending
helpers_test.go:352: "task-pv-pod" [961c1391-77fb-4900-adeb-a2b9b125d724] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.009787338s
addons_test.go:572: (dbg) Run:  kubectl --context addons-447971 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-447971 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-447971 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-447971 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-447971 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-447971 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-447971 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [538a0ac7-6d89-4993-92a2-1336448b2d69] Pending
helpers_test.go:352: "task-pv-pod-restore" [538a0ac7-6d89-4993-92a2-1336448b2d69] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [538a0ac7-6d89-4993-92a2-1336448b2d69] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002904619s
addons_test.go:614: (dbg) Run:  kubectl --context addons-447971 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-447971 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-447971 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-447971 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.576336057s)
--- PASS: TestAddons/parallel/CSI (33.00s)
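The CSI snapshot/restore round-trip above is reproducible by hand. A sketch, assuming the testdata manifests from the minikube repository and the csi-hostpath-driver and volumesnapshots addons already enabled:

    kubectl --context addons-447971 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-447971 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-447971 wait --for=condition=Ready pod/task-pv-pod --timeout=6m
    # snapshot the volume, drop the original pod and claim, then restore
    kubectl --context addons-447971 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-447971 delete pod task-pv-pod
    kubectl --context addons-447971 delete pvc hpvc
    kubectl --context addons-447971 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-447971 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    kubectl --context addons-447971 wait --for=condition=Ready pod/task-pv-pod-restore --timeout=6m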

TestAddons/parallel/Headlamp (17.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-447971 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-md92d" [5a319173-ea8a-4e32-8668-fb4331bf5c4d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-md92d" [5a319173-ea8a-4e32-8668-fb4331bf5c4d] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003578268s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-447971 addons disable headlamp --alsologtostderr -v=1: (5.782575391s)
--- PASS: TestAddons/parallel/Headlamp (17.56s)
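The enable/wait/disable pattern used by this and the neighboring addon tests is the same three commands each time; a sketch for headlamp:

    minikube addons enable headlamp -p addons-447971
    kubectl --context addons-447971 -n headlamp wait --for=condition=Ready \
      pod -l app.kubernetes.io/name=headlamp --timeout=8m
    minikube -p addons-447971 addons disable headlamp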

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-p4n4j" [d34a54c4-9a1e-42ff-ae68-9d24b56b264c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00426126s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (50.79s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-447971 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-447971 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-447971 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [026c2a73-ba31-4d9b-a2e9-76fe5892d053] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [026c2a73-ba31-4d9b-a2e9-76fe5892d053] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [026c2a73-ba31-4d9b-a2e9-76fe5892d053] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005126065s
addons_test.go:967: (dbg) Run:  kubectl --context addons-447971 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 ssh "cat /opt/local-path-provisioner/pvc-17e8fb08-70a7-4296-ae26-0210cc5a128e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-447971 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-447971 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-447971 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.783409806s)
--- PASS: TestAddons/parallel/LocalPath (50.79s)
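The local-path check amounts to writing through a PVC and reading the file back from the node's hostPath. A sketch; the provisioner directory embeds the generated PV name (pvc-<uid>), which can be recovered from the claim's .spec.volumeName rather than hard-coded:

    kubectl --context addons-447971 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-447971 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # once the busybox pod has completed, read the file back off the node
    PV_NAME=$(kubectl --context addons-447971 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    minikube -p addons-447971 ssh "cat /opt/local-path-provisioner/${PV_NAME}_default_test-pvc/file1"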

TestAddons/parallel/NvidiaDevicePlugin (6.04s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-d2hpg" [f8110b8b-c1e9-4033-a4cb-e50d0bbfbb84] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003940557s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-447971 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.037879595s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.04s)

TestAddons/parallel/Yakd (11.76s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-hzfsp" [23e6fb2c-ae0c-4f42-af4d-18f5c13e0d9a] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003604406s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-447971 addons disable yakd --alsologtostderr -v=1: (5.755270089s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

TestAddons/parallel/AmdGpuDevicePlugin (6.04s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-rw8kz" [1a12eb5b-8fef-469b-b455-307de3dea1b8] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003858481s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-447971 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-447971 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: (1.0367679s)
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.04s)

TestAddons/StoppedEnableDisable (12.77s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-447971
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-447971: (12.504834736s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-447971
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-447971
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-447971
--- PASS: TestAddons/StoppedEnableDisable (12.77s)

TestCertOptions (28.77s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-786178 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-786178 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (24.339907101s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-786178 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-786178 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-786178 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-786178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-786178
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-786178: (3.705829102s)
--- PASS: TestCertOptions (28.77s)
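What this test asserts can be checked directly against the generated apiserver certificate. A sketch, reusing this run's profile name and a plain `minikube` on PATH:

    minikube start -p cert-options-786178 --apiserver-ips=192.168.15.15 \
      --apiserver-names=www.google.com --apiserver-port=8555 \
      --driver=docker --container-runtime=containerd
    # the extra IPs/names should appear as SANs; 8555 should appear in the kubeconfig
    minikube -p cert-options-786178 ssh "openssl x509 -text -noout \
      -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    kubectl --context cert-options-786178 config view | grep 8555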

TestCertExpiration (213.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-793585 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-793585 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (25.577644855s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-793585 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-793585 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.581780903s)
helpers_test.go:175: Cleaning up "cert-expiration-793585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-793585
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-793585: (2.537452518s)
--- PASS: TestCertExpiration (213.70s)
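Most of the ~213 s here is the test waiting out the 3-minute certificate lifetime between the two starts (the commands themselves account for roughly 34 s). A sketch of the same flow; the sleep duration is an assumption matching the 3m expiration:

    minikube start -p cert-expiration-793585 --memory=3072 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    sleep 180   # let the short-lived certs expire
    # a restart with a longer expiration regenerates the certificates
    minikube start -p cert-expiration-793585 --memory=3072 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd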

TestForceSystemdFlag (23.5s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-531872 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-531872 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.190645045s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-531872 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-531872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-531872
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-531872: (2.003568988s)
--- PASS: TestForceSystemdFlag (23.50s)
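--force-systemd should surface as the systemd cgroup driver in containerd's config, which is what the `cat /etc/containerd/config.toml` step inspects. A sketch of the same check:

    minikube start -p force-systemd-flag-531872 --memory=3072 --force-systemd \
      --driver=docker --container-runtime=containerd
    # expect SystemdCgroup = true in the runc runtime options
    minikube -p force-systemd-flag-531872 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup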

TestForceSystemdEnv (39.68s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-071674 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-071674 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.363817272s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-071674 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-071674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-071674
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-071674: (7.908319109s)
--- PASS: TestForceSystemdEnv (39.68s)

TestDockerEnvContainerd (35.74s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-247541 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-247541 --driver=docker  --container-runtime=containerd: (20.308682929s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-247541"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-247541": (1.039771528s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXuj2IIV/agent.542794" SSH_AGENT_PID="542795" DOCKER_HOST=ssh://docker@127.0.0.1:33171 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXuj2IIV/agent.542794" SSH_AGENT_PID="542795" DOCKER_HOST=ssh://docker@127.0.0.1:33171 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXuj2IIV/agent.542794" SSH_AGENT_PID="542795" DOCKER_HOST=ssh://docker@127.0.0.1:33171 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.002306318s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXuj2IIV/agent.542794" SSH_AGENT_PID="542795" DOCKER_HOST=ssh://docker@127.0.0.1:33171 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-247541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-247541
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-247541: (2.382842161s)
--- PASS: TestDockerEnvContainerd (35.74s)
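The interesting step above is the docker-env round-trip: with --ssh-host --ssh-add, the emitted environment points the local docker CLI at the minikube node over SSH (DOCKER_HOST=ssh://... plus SSH agent variables). A sketch of the same workflow outside the test harness:

    minikube start -p dockerenv-247541 --driver=docker --container-runtime=containerd
    # export DOCKER_HOST and SSH agent variables into this shell
    eval "$(minikube -p dockerenv-247541 docker-env --ssh-host --ssh-add)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls   # the freshly built image should be listed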

TestKVMDriverInstallOrUpdate (0.76s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1008 14:34:16.437601  516787 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 14:34:16.437786  516787 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3183143468/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1008 14:34:16.469555  516787 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3183143468/001/docker-machine-driver-kvm2 version is 1.1.1
W1008 14:34:16.469590  516787 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1008 14:34:16.469736  516787 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1008 14:34:16.469767  516787 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3183143468/001/docker-machine-driver-kvm2
I1008 14:34:17.046540  516787 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3183143468/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1008 14:34:17.062248  516787 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3183143468/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.76s)
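The log shows the update path: minikube runs the existing driver binary, compares its reported version (1.1.1) against the wanted one (1.37.0), and re-downloads on mismatch. A sketch of the probe it performs; the `version` subcommand is what the validation step invokes, though its exact output format may vary:

    # where would the kvm2 driver be picked up from, and what version is it?
    command -v docker-machine-driver-kvm2
    docker-machine-driver-kvm2 version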

TestErrorSpam/setup (20.72s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-636290 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-636290 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-636290 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-636290 --driver=docker  --container-runtime=containerd: (20.719491907s)
--- PASS: TestErrorSpam/setup (20.72s)

TestErrorSpam/start (0.66s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

TestErrorSpam/status (0.97s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 status
--- PASS: TestErrorSpam/status (0.97s)

TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 pause
--- PASS: TestErrorSpam/pause (1.51s)

TestErrorSpam/unpause (1.6s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 stop: (1.240453905s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636290 --log_dir /tmp/nospam-636290 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/test/nested/copy/516787/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (39.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-686950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (39.299014497s)
--- PASS: TestFunctional/serial/StartWithProxy (39.30s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.16s)

=== RUN   TestFunctional/serial/SoftStart
I1008 14:10:01.543784  516787 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686950 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-686950 --alsologtostderr -v=8: (6.158775213s)
functional_test.go:678: soft start took 6.162240459s for "functional-686950" cluster.
I1008 14:10:07.705677  516787 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.16s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-686950 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.52s)

TestFunctional/serial/CacheCmd/cache/add_local (0.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-686950 /tmp/TestFunctionalserialCacheCmdcacheadd_local913203859/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 cache add minikube-local-cache-test:functional-686950
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 cache delete minikube-local-cache-test:functional-686950
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-686950
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.88s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686950 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.541067ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
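This check doubles as a handy recipe: remove a cached image from the node, confirm it is gone, then restore it from minikube's on-host cache. A sketch mirroring the commands above:

    minikube -p functional-686950 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-686950 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    minikube -p functional-686950 cache reload    # pushes cached images back onto the node
    minikube -p functional-686950 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again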

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 kubectl -- --context functional-686950 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-686950 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (41.5s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1008 14:10:50.115462  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:10:50.121916  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:10:50.133307  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:10:50.154768  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:10:50.196235  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:10:50.277787  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:10:50.439362  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:10:50.761098  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:10:51.403159  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:10:52.684726  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-686950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.500027342s)
functional_test.go:776: restart took 41.500159138s for "functional-686950" cluster.
I1008 14:10:55.061557  516787 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.50s)
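--extra-config threads component flags through to kubeadm, and the restart re-renders the static pod manifests with them. A sketch, with a verification step (the jsonpath probe is an assumption, not part of the test) that the flag reached the apiserver:

    minikube start -p functional-686950 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # inspect the apiserver static pod's command line for the admission plugin
    kubectl --context functional-686950 -n kube-system get pod -l component=kube-apiserver \
      -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission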

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-686950 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.25s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 logs
E1008 14:10:55.246714  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-686950 logs: (1.251002655s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

TestFunctional/serial/LogsFileCmd (1.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 logs --file /tmp/TestFunctionalserialLogsFileCmd2395654966/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-686950 logs --file /tmp/TestFunctionalserialLogsFileCmd2395654966/001/logs.txt: (1.278842501s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

TestFunctional/serial/InvalidService (3.8s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-686950 apply -f testdata/invalidsvc.yaml
E1008 14:11:00.368601  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-686950
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-686950: exit status 115 (347.630103ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30390 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-686950 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.80s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686950 config get cpus: exit status 14 (74.607887ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686950 config get cpus: exit status 14 (58.973339ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DashboardCmd (9.82s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-686950 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-686950 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 564600: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.82s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-686950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (187.43849ms)

-- stdout --
	* [functional-686950] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1008 14:11:04.108006  560059 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:11:04.108394  560059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:11:04.108410  560059 out.go:374] Setting ErrFile to fd 2...
	I1008 14:11:04.108417  560059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:11:04.108727  560059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	I1008 14:11:04.109423  560059 out.go:368] Setting JSON to false
	I1008 14:11:04.111081  560059 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6813,"bootTime":1759925851,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:11:04.111254  560059 start.go:141] virtualization: kvm guest
	I1008 14:11:04.116062  560059 out.go:179] * [functional-686950] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:11:04.117676  560059 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:11:04.117683  560059 notify.go:220] Checking for updates...
	I1008 14:11:04.120720  560059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:11:04.121891  560059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	I1008 14:11:04.123200  560059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	I1008 14:11:04.124454  560059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:11:04.125827  560059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:11:04.127527  560059 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:11:04.128155  560059 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:11:04.155004  560059 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:11:04.155171  560059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:11:04.220542  560059 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-08 14:11:04.209018718 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:11:04.220710  560059 docker.go:318] overlay module found
	I1008 14:11:04.222692  560059 out.go:179] * Using the docker driver based on existing profile
	I1008 14:11:04.224031  560059 start.go:305] selected driver: docker
	I1008 14:11:04.224051  560059 start.go:925] validating driver "docker" against &{Name:functional-686950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-686950 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:11:04.224173  560059 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:11:04.226078  560059 out.go:203] 
	W1008 14:11:04.227248  560059 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1008 14:11:04.228410  560059 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686950 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-686950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (177.771583ms)

-- stdout --
	* [functional-686950] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1008 14:11:04.544341  560444 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:11:04.544636  560444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:11:04.544648  560444 out.go:374] Setting ErrFile to fd 2...
	I1008 14:11:04.544652  560444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:11:04.545009  560444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	I1008 14:11:04.545471  560444 out.go:368] Setting JSON to false
	I1008 14:11:04.546506  560444 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6814,"bootTime":1759925851,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:11:04.546612  560444 start.go:141] virtualization: kvm guest
	I1008 14:11:04.549145  560444 out.go:179] * [functional-686950] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1008 14:11:04.550544  560444 notify.go:220] Checking for updates...
	I1008 14:11:04.551882  560444 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:11:04.553375  560444 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:11:04.554633  560444 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	I1008 14:11:04.555825  560444 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	I1008 14:11:04.557245  560444 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:11:04.558615  560444 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:11:04.560770  560444 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:11:04.561550  560444 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:11:04.586123  560444 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:11:04.586224  560444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:11:04.652809  560444 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-08 14:11:04.641702408 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:11:04.652968  560444 docker.go:318] overlay module found
	I1008 14:11:04.655624  560444 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1008 14:11:04.657126  560444 start.go:305] selected driver: docker
	I1008 14:11:04.657146  560444 start.go:925] validating driver "docker" against &{Name:functional-686950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-686950 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:11:04.657275  560444 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:11:04.659311  560444 out.go:203] 
	W1008 14:11:04.660471  560444 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1008 14:11:04.661666  560444 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)

TestFunctional/parallel/ServiceCmdConnect (13.81s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-686950 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-686950 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-kqslj" [d095b41b-5cc1-40df-9932-7a196d50f95c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-kqslj" [d095b41b-5cc1-40df-9932-7a196d50f95c] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.003978435s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30464
functional_test.go:1680: http://192.168.49.2:30464: success! body:
Request served by hello-node-connect-7d85dfc575-kqslj

HTTP/1.1 GET /

Host: 192.168.49.2:30464
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.81s)

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (32.43s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [fe3a2783-0328-42da-8b31-1d0d29ed7c49] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003522014s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-686950 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-686950 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-686950 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-686950 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [da823813-dec5-4a2c-8581-d1feafa96e6c] Pending
helpers_test.go:352: "sp-pod" [da823813-dec5-4a2c-8581-d1feafa96e6c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [da823813-dec5-4a2c-8581-d1feafa96e6c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003377138s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-686950 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-686950 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-686950 delete -f testdata/storage-provisioner/pod.yaml: (1.492379014s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-686950 apply -f testdata/storage-provisioner/pod.yaml
I1008 14:11:22.984187  516787 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [19bad066-d937-4867-9380-383be8aa5ca3] Pending
helpers_test.go:352: "sp-pod" [19bad066-d937-4867-9380-383be8aa5ca3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [19bad066-d937-4867-9380-383be8aa5ca3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004159519s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-686950 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.43s)

TestFunctional/parallel/SSHCmd (0.57s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

TestFunctional/parallel/CpCmd (1.93s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh -n functional-686950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 cp functional-686950:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd897319278/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh -n functional-686950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh -n functional-686950 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.93s)

TestFunctional/parallel/MySQL (18.36s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-686950 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-9qlbm" [a31bb0f7-8e40-43dd-9027-f4a99b8d2919] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-9qlbm" [a31bb0f7-8e40-43dd-9027-f4a99b8d2919] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.004288785s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-686950 exec mysql-5bb876957f-9qlbm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-686950 exec mysql-5bb876957f-9qlbm -- mysql -ppassword -e "show databases;": exit status 1 (113.04771ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1008 14:11:37.559472  516787 retry.go:31] will retry after 1.095231173s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-686950 exec mysql-5bb876957f-9qlbm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-686950 exec mysql-5bb876957f-9qlbm -- mysql -ppassword -e "show databases;": exit status 1 (113.665159ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1008 14:11:38.768818  516787 retry.go:31] will retry after 1.585032312s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-686950 exec mysql-5bb876957f-9qlbm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (18.36s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/516787/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "sudo cat /etc/test/nested/copy/516787/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (1.87s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/516787.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "sudo cat /etc/ssl/certs/516787.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/516787.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "sudo cat /usr/share/ca-certificates/516787.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5167872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "sudo cat /etc/ssl/certs/5167872.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5167872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "sudo cat /usr/share/ca-certificates/5167872.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.87s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-686950 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686950 ssh "sudo systemctl is-active docker": exit status 1 (283.781964ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686950 ssh "sudo systemctl is-active crio": exit status 1 (300.004257ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-686950 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-686950 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-6xf9n" [b1087554-0d2a-4872-905b-c6f2df6afbfd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-6xf9n" [b1087554-0d2a-4872-905b-c6f2df6afbfd] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004338062s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "398.485419ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "64.871184ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/MountCmd/any-port (7.22s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686950 /tmp/TestFunctionalparallelMountCmdany-port2042201304/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759932662593033293" to /tmp/TestFunctionalparallelMountCmdany-port2042201304/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759932662593033293" to /tmp/TestFunctionalparallelMountCmdany-port2042201304/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759932662593033293" to /tmp/TestFunctionalparallelMountCmdany-port2042201304/001/test-1759932662593033293
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686950 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.126729ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1008 14:11:02.925496  516787 retry.go:31] will retry after 746.264512ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  8 14:11 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  8 14:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  8 14:11 test-1759932662593033293
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh cat /mount-9p/test-1759932662593033293
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-686950 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [20bf25ae-9347-41b9-9291-6b7027ab449a] Pending
helpers_test.go:352: "busybox-mount" [20bf25ae-9347-41b9-9291-6b7027ab449a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [20bf25ae-9347-41b9-9291-6b7027ab449a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [20bf25ae-9347-41b9-9291-6b7027ab449a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003980736s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-686950 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686950 /tmp/TestFunctionalparallelMountCmdany-port2042201304/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.22s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "393.451109ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "64.413852ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686950 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-686950
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-686950
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686950 image ls --format short --alsologtostderr:
I1008 14:11:24.213194  566485 out.go:360] Setting OutFile to fd 1 ...
I1008 14:11:24.213447  566485 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:11:24.213455  566485 out.go:374] Setting ErrFile to fd 2...
I1008 14:11:24.213459  566485 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:11:24.213719  566485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
I1008 14:11:24.214373  566485 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:11:24.214469  566485 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:11:24.214848  566485 cli_runner.go:164] Run: docker container inspect functional-686950 --format={{.State.Status}}
I1008 14:11:24.235438  566485 ssh_runner.go:195] Run: systemctl --version
I1008 14:11:24.235500  566485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686950
I1008 14:11:24.255578  566485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/functional-686950/id_rsa Username:docker}
I1008 14:11:24.360182  566485 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686950 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-686950  │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-686950  │ sha256:cdf221 │ 991B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:a63019 │ 25.1MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ localhost/my-image                          │ functional-686950  │ sha256:b07076 │ 775kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/library/nginx                     │ latest             │ sha256:07ccdb │ 62.7MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686950 image ls --format table --alsologtostderr:
I1008 14:11:27.605302  567082 out.go:360] Setting OutFile to fd 1 ...
I1008 14:11:27.605578  567082 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:11:27.605587  567082 out.go:374] Setting ErrFile to fd 2...
I1008 14:11:27.605592  567082 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:11:27.605834  567082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
I1008 14:11:27.606513  567082 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:11:27.606641  567082 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:11:27.607187  567082 cli_runner.go:164] Run: docker container inspect functional-686950 --format={{.State.Status}}
I1008 14:11:27.629170  567082 ssh_runner.go:195] Run: systemctl --version
I1008 14:11:27.629223  567082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686950
I1008 14:11:27.649564  567082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/functional-686950/id_rsa Username:docker}
I1008 14:11:27.758692  567082 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686950 image ls --format json --alsologtostderr:
[{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-686950","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha2
56:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22
d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:cdf2210e68fbc357c62e0117c72ec4a95c9fb06a3a64a9bbe40ea1edf433b214","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-686950"],"size":"991"},{"id":"sha256:a63019652e24443f18e4806cae975591a737588479e88047f8c4e11991819d24","repoDigests":["docker.io/library/nginx@sha256:56c93b2a17e185519a5f420173f899783f0890da60463011c59ddbb904f02093"],"repoTags":["docker.io/library/nginx:alpine"],"size":"25067252"},{"id":"sha256:07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"62706233"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provision
er:v5"],"size":"9058936"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e47941
9f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686950 image ls --format json --alsologtostderr:
I1008 14:11:27.237797  567026 out.go:360] Setting OutFile to fd 1 ...
I1008 14:11:27.238134  567026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:11:27.238145  567026 out.go:374] Setting ErrFile to fd 2...
I1008 14:11:27.238150  567026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:11:27.238364  567026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
I1008 14:11:27.239004  567026 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:11:27.239144  567026 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:11:27.239544  567026 cli_runner.go:164] Run: docker container inspect functional-686950 --format={{.State.Status}}
I1008 14:11:27.259412  567026 ssh_runner.go:195] Run: systemctl --version
I1008 14:11:27.259473  567026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686950
I1008 14:11:27.279223  567026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/functional-686950/id_rsa Username:docker}
I1008 14:11:27.393359  567026 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686950 image ls --format yaml --alsologtostderr:
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "62706233"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:a63019652e24443f18e4806cae975591a737588479e88047f8c4e11991819d24
repoDigests:
- docker.io/library/nginx@sha256:56c93b2a17e185519a5f420173f899783f0890da60463011c59ddbb904f02093
repoTags:
- docker.io/library/nginx:alpine
size: "25067252"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-686950
- docker.io/kicbase/echo-server:latest
size: "2372971"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:cdf2210e68fbc357c62e0117c72ec4a95c9fb06a3a64a9bbe40ea1edf433b214
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-686950
size: "991"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686950 image ls --format yaml --alsologtostderr:
I1008 14:11:24.446201  566538 out.go:360] Setting OutFile to fd 1 ...
I1008 14:11:24.446481  566538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:11:24.446491  566538 out.go:374] Setting ErrFile to fd 2...
I1008 14:11:24.446496  566538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:11:24.446690  566538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
I1008 14:11:24.447345  566538 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:11:24.447443  566538 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:11:24.447830  566538 cli_runner.go:164] Run: docker container inspect functional-686950 --format={{.State.Status}}
I1008 14:11:24.468045  566538 ssh_runner.go:195] Run: systemctl --version
I1008 14:11:24.468124  566538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686950
I1008 14:11:24.487472  566538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/functional-686950/id_rsa Username:docker}
I1008 14:11:24.595339  566538 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686950 ssh pgrep buildkitd: exit status 1 (303.733113ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image build -t localhost/my-image:functional-686950 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-686950 image build -t localhost/my-image:functional-686950 testdata/build --alsologtostderr: (2.933942966s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686950 image build -t localhost/my-image:functional-686950 testdata/build --alsologtostderr:
I1008 14:11:24.998596  566721 out.go:360] Setting OutFile to fd 1 ...
I1008 14:11:24.998930  566721 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:11:24.998946  566721 out.go:374] Setting ErrFile to fd 2...
I1008 14:11:24.998954  566721 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:11:24.999237  566721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
I1008 14:11:25.000029  566721 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:11:25.000905  566721 config.go:182] Loaded profile config "functional-686950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:11:25.001739  566721 cli_runner.go:164] Run: docker container inspect functional-686950 --format={{.State.Status}}
I1008 14:11:25.023297  566721 ssh_runner.go:195] Run: systemctl --version
I1008 14:11:25.023349  566721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686950
I1008 14:11:25.045040  566721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/functional-686950/id_rsa Username:docker}
I1008 14:11:25.155100  566721 build_images.go:161] Building image from path: /tmp/build.2720176752.tar
I1008 14:11:25.155184  566721 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1008 14:11:25.164950  566721 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2720176752.tar
I1008 14:11:25.169015  566721 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2720176752.tar: stat -c "%s %y" /var/lib/minikube/build/build.2720176752.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2720176752.tar': No such file or directory
I1008 14:11:25.169045  566721 ssh_runner.go:362] scp /tmp/build.2720176752.tar --> /var/lib/minikube/build/build.2720176752.tar (3072 bytes)
I1008 14:11:25.190272  566721 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2720176752
I1008 14:11:25.199214  566721 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2720176752 -xf /var/lib/minikube/build/build.2720176752.tar
I1008 14:11:25.209933  566721 containerd.go:394] Building image: /var/lib/minikube/build/build.2720176752
I1008 14:11:25.210029  566721 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2720176752 --local dockerfile=/var/lib/minikube/build/build.2720176752 --output type=image,name=localhost/my-image:functional-686950
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:ef194176669462455244b7ca436e5a28b0b75824ff6a58a597bdf0f1b2d119d4 done
#8 exporting config sha256:b070762e05ed12055a273c0a95a1d24972ca79386489e27214f537bdcc5d6c37 done
#8 naming to localhost/my-image:functional-686950 done
#8 DONE 0.1s
I1008 14:11:27.844234  566721 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2720176752 --local dockerfile=/var/lib/minikube/build/build.2720176752 --output type=image,name=localhost/my-image:functional-686950: (2.634173057s)
I1008 14:11:27.844319  566721 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2720176752
I1008 14:11:27.854532  566721 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2720176752.tar
I1008 14:11:27.865210  566721 build_images.go:217] Built localhost/my-image:functional-686950 from /tmp/build.2720176752.tar
I1008 14:11:27.865242  566721 build_images.go:133] succeeded building to: functional-686950
I1008 14:11:27.865248  566721 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.50s)
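Note: the build runs as a single buildctl invocation over SSH; steps #5-#7 above correspond to a Dockerfile of roughly FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /. A sketch of issuing the same invocation with os/exec, assuming buildkitd is reachable on the node; the command is copied from the log, but this is not minikube's build_images.go:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// The staging directory is the tarball minikube scp'd to the node
	// and unpacked before building, as shown in the log above.
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context=/var/lib/minikube/build/build.2720176752",
		"--local", "dockerfile=/var/lib/minikube/build/build.2720176752",
		"--output", "type=image,name=localhost/my-image:functional-686950")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}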

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-686950
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image load --daemon kicbase/echo-server:functional-686950 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image load --daemon kicbase/echo-server:functional-686950 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-686950
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image load --daemon kicbase/echo-server:functional-686950 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image save kicbase/echo-server:functional-686950 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image rm kicbase/echo-server:functional-686950 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-686950
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 image save --daemon kicbase/echo-server:functional-686950 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-686950
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-686950 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-686950 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-686950 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 562686: os: process already finished
helpers_test.go:519: unable to terminate pid 562192: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-686950 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686950 /tmp/TestFunctionalparallelMountCmdspecific-port3567822875/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686950 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.131737ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 14:11:10.211651  516787 retry.go:31] will retry after 317.761021ms: exit status 1
I1008 14:11:10.244276  516787 detect.go:223] nested VM detected
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686950 /tmp/TestFunctionalparallelMountCmdspecific-port3567822875/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686950 ssh "sudo umount -f /mount-9p": exit status 1 (302.391177ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-686950 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686950 /tmp/TestFunctionalparallelMountCmdspecific-port3567822875/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)
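Note: the `retry.go:31] will retry after 317.761021ms` line above reflects the test helper re-running the failed findmnt after a randomized delay. A minimal sketch of that retry pattern, with illustrative interval bounds rather than the helper's actual ones:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfterRandomBackoff retries run up to attempts times, sleeping a
// randomized interval between tries, mirroring the pattern in the log.
func retryAfterRandomBackoff(attempts int, run func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = run(); err == nil {
			return nil
		}
		wait := time.Duration(100+rand.Intn(400)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	calls := 0
	_ = retryAfterRandomBackoff(3, func() error {
		calls++
		if calls < 2 {
			return errors.New("exit status 1") // e.g. findmnt before the 9p mount appears
		}
		return nil
	})
}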

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 service list -o json
functional_test.go:1504: Took "380.40563ms" to run "out/minikube-linux-amd64 -p functional-686950 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-686950 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-686950 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [259b2da5-994a-4af0-89dd-eb5813449a74] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1008 14:11:10.610585  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "nginx-svc" [259b2da5-994a-4af0-89dd-eb5813449a74] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003949455s
I1008 14:11:21.542679  516787 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)
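Note: the 4m0s wait above polls the default namespace for pods labeled run=nginx-svc until one reports Running. A client-go sketch of that wait, assuming a kubeconfig at the usual path; this is illustrative, not the helpers_test.go implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNginxSvc polls pods labeled run=nginx-svc in default until one
// is Running, mirroring the 4m0s wait in the log above.
func waitForNginxSvc(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := clientset.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "run=nginx-svc"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for run=nginx-svc")
}

func main() {
	if err := waitForNginxSvc(clientcmd.RecommendedHomeFile); err != nil {
		panic(err)
	}
}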

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32236
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32236
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2135979606/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2135979606/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2135979606/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686950 ssh "findmnt -T" /mount1: exit status 1 (407.9657ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 14:11:12.089590  516787 retry.go:31] will retry after 267.867575ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-686950 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2135979606/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2135979606/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2135979606/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-686950 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
2025/10/08 14:11:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.170.39 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-686950 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 update-context --alsologtostderr -v=2
E1008 14:11:31.092646  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-686950 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-686950
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-686950
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-686950
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (113.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1008 14:12:12.054856  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:13:33.976570  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-377523 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m52.893765688s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (113.64s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-377523 kubectl -- rollout status deployment/busybox: (2.757830335s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-bfxdg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-jhss2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-klzbc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-bfxdg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-jhss2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-klzbc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-bfxdg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-jhss2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-klzbc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.19s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-bfxdg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-bfxdg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-jhss2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-jhss2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-klzbc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 kubectl -- exec busybox-7b57f96db7-klzbc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)
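Note: each probe above resolves host.minikube.internal inside the pod, then pings the result (192.168.49.1, the cluster network's gateway). The pipeline `awk 'NR==5' | cut -d' ' -f3` takes the fifth line of nslookup's output and its third space-separated field. The same extraction in Go, over a hypothetical BusyBox-style nslookup output (the real layout may differ):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical BusyBox nslookup output; line 5, field 3 is the
	// resolved address under this layout.
	out := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1`
	lines := strings.Split(out, "\n")
	// awk 'NR==5' -> fifth line (1-indexed); cut -d' ' -f3 -> third field.
	fields := strings.Split(lines[4], " ")
	if len(fields) >= 3 {
		fmt.Println(fields[2]) // 192.168.49.1
	}
}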

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (23.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-377523 node add --alsologtostderr -v 5: (23.000025483s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.91s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-377523 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp testdata/cp-test.txt ha-377523:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3862720536/001/cp-test_ha-377523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523:/home/docker/cp-test.txt ha-377523-m02:/home/docker/cp-test_ha-377523_ha-377523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m02 "sudo cat /home/docker/cp-test_ha-377523_ha-377523-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523:/home/docker/cp-test.txt ha-377523-m03:/home/docker/cp-test_ha-377523_ha-377523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m03 "sudo cat /home/docker/cp-test_ha-377523_ha-377523-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523:/home/docker/cp-test.txt ha-377523-m04:/home/docker/cp-test_ha-377523_ha-377523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m04 "sudo cat /home/docker/cp-test_ha-377523_ha-377523-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp testdata/cp-test.txt ha-377523-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3862720536/001/cp-test_ha-377523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m02:/home/docker/cp-test.txt ha-377523:/home/docker/cp-test_ha-377523-m02_ha-377523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523 "sudo cat /home/docker/cp-test_ha-377523-m02_ha-377523.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m02:/home/docker/cp-test.txt ha-377523-m03:/home/docker/cp-test_ha-377523-m02_ha-377523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m03 "sudo cat /home/docker/cp-test_ha-377523-m02_ha-377523-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m02:/home/docker/cp-test.txt ha-377523-m04:/home/docker/cp-test_ha-377523-m02_ha-377523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m04 "sudo cat /home/docker/cp-test_ha-377523-m02_ha-377523-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp testdata/cp-test.txt ha-377523-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3862720536/001/cp-test_ha-377523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m03:/home/docker/cp-test.txt ha-377523:/home/docker/cp-test_ha-377523-m03_ha-377523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523 "sudo cat /home/docker/cp-test_ha-377523-m03_ha-377523.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m03:/home/docker/cp-test.txt ha-377523-m02:/home/docker/cp-test_ha-377523-m03_ha-377523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m02 "sudo cat /home/docker/cp-test_ha-377523-m03_ha-377523-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m03:/home/docker/cp-test.txt ha-377523-m04:/home/docker/cp-test_ha-377523-m03_ha-377523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m04 "sudo cat /home/docker/cp-test_ha-377523-m03_ha-377523-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp testdata/cp-test.txt ha-377523-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3862720536/001/cp-test_ha-377523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m04:/home/docker/cp-test.txt ha-377523:/home/docker/cp-test_ha-377523-m04_ha-377523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523 "sudo cat /home/docker/cp-test_ha-377523-m04_ha-377523.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m04:/home/docker/cp-test.txt ha-377523-m02:/home/docker/cp-test_ha-377523-m04_ha-377523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m02 "sudo cat /home/docker/cp-test_ha-377523-m04_ha-377523-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 cp ha-377523-m04:/home/docker/cp-test.txt ha-377523-m03:/home/docker/cp-test_ha-377523-m04_ha-377523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 ssh -n ha-377523-m03 "sudo cat /home/docker/cp-test_ha-377523-m04_ha-377523-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.49s)
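Note: the matrix above stages testdata/cp-test.txt on each node, fans it out to every other node, and cats each copy back over ssh to verify it. Reduced to a sketch; the run helper is hypothetical shorthand for the test's (dbg) runner:

package main

import (
	"fmt"
	"os/exec"
)

// run is hypothetical shorthand for the test's (dbg) Run helper.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("%s\n", out)
	return err
}

func main() {
	nodes := []string{"ha-377523", "ha-377523-m02", "ha-377523-m03", "ha-377523-m04"}
	for _, src := range nodes {
		// Stage the file on src, then copy it to every other node,
		// cat-ing it back after each step (as in the log above).
		run("-p", "ha-377523", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run("-p", "ha-377523", "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			name := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run("-p", "ha-377523", "cp", src+":/home/docker/cp-test.txt", dst+":"+name)
			run("-p", "ha-377523", "ssh", "-n", dst, "sudo cat "+name)
		}
	}
}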

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-377523 node stop m02 --alsologtostderr -v 5: (11.944808733s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377523 status --alsologtostderr -v 5: exit status 7 (736.845191ms)

                                                
                                                
-- stdout --
	ha-377523
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377523-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-377523-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377523-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 14:14:38.501401  588653 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:14:38.501686  588653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:14:38.501696  588653 out.go:374] Setting ErrFile to fd 2...
	I1008 14:14:38.501701  588653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:14:38.501905  588653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	I1008 14:14:38.502127  588653 out.go:368] Setting JSON to false
	I1008 14:14:38.502164  588653 mustload.go:65] Loading cluster: ha-377523
	I1008 14:14:38.502314  588653 notify.go:220] Checking for updates...
	I1008 14:14:38.502607  588653 config.go:182] Loaded profile config "ha-377523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:14:38.502623  588653 status.go:174] checking status of ha-377523 ...
	I1008 14:14:38.503139  588653 cli_runner.go:164] Run: docker container inspect ha-377523 --format={{.State.Status}}
	I1008 14:14:38.523768  588653 status.go:371] ha-377523 host status = "Running" (err=<nil>)
	I1008 14:14:38.523799  588653 host.go:66] Checking if "ha-377523" exists ...
	I1008 14:14:38.524156  588653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-377523
	I1008 14:14:38.544799  588653 host.go:66] Checking if "ha-377523" exists ...
	I1008 14:14:38.545146  588653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:14:38.545222  588653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-377523
	I1008 14:14:38.563998  588653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/ha-377523/id_rsa Username:docker}
	I1008 14:14:38.666789  588653 ssh_runner.go:195] Run: systemctl --version
	I1008 14:14:38.673373  588653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:14:38.687051  588653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:14:38.748963  588653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-08 14:14:38.738327758 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:14:38.749770  588653 kubeconfig.go:125] found "ha-377523" server: "https://192.168.49.254:8443"
	I1008 14:14:38.749810  588653 api_server.go:166] Checking apiserver status ...
	I1008 14:14:38.749864  588653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:14:38.763702  588653 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	W1008 14:14:38.773744  588653 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:14:38.773833  588653 ssh_runner.go:195] Run: ls
	I1008 14:14:38.778457  588653 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1008 14:14:38.783078  588653 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1008 14:14:38.783105  588653 status.go:463] ha-377523 apiserver status = Running (err=<nil>)
	I1008 14:14:38.783124  588653 status.go:176] ha-377523 status: &{Name:ha-377523 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:14:38.783141  588653 status.go:174] checking status of ha-377523-m02 ...
	I1008 14:14:38.783395  588653 cli_runner.go:164] Run: docker container inspect ha-377523-m02 --format={{.State.Status}}
	I1008 14:14:38.803416  588653 status.go:371] ha-377523-m02 host status = "Stopped" (err=<nil>)
	I1008 14:14:38.803439  588653 status.go:384] host is not running, skipping remaining checks
	I1008 14:14:38.803447  588653 status.go:176] ha-377523-m02 status: &{Name:ha-377523-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:14:38.803474  588653 status.go:174] checking status of ha-377523-m03 ...
	I1008 14:14:38.803750  588653 cli_runner.go:164] Run: docker container inspect ha-377523-m03 --format={{.State.Status}}
	I1008 14:14:38.822806  588653 status.go:371] ha-377523-m03 host status = "Running" (err=<nil>)
	I1008 14:14:38.822835  588653 host.go:66] Checking if "ha-377523-m03" exists ...
	I1008 14:14:38.823242  588653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-377523-m03
	I1008 14:14:38.843026  588653 host.go:66] Checking if "ha-377523-m03" exists ...
	I1008 14:14:38.843312  588653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:14:38.843350  588653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-377523-m03
	I1008 14:14:38.864218  588653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/ha-377523-m03/id_rsa Username:docker}
	I1008 14:14:38.967629  588653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:14:38.981305  588653 kubeconfig.go:125] found "ha-377523" server: "https://192.168.49.254:8443"
	I1008 14:14:38.981338  588653 api_server.go:166] Checking apiserver status ...
	I1008 14:14:38.981403  588653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:14:38.993350  588653 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W1008 14:14:39.003425  588653 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:14:39.003487  588653 ssh_runner.go:195] Run: ls
	I1008 14:14:39.007457  588653 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1008 14:14:39.011882  588653 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1008 14:14:39.011906  588653 status.go:463] ha-377523-m03 apiserver status = Running (err=<nil>)
	I1008 14:14:39.011914  588653 status.go:176] ha-377523-m03 status: &{Name:ha-377523-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:14:39.011928  588653 status.go:174] checking status of ha-377523-m04 ...
	I1008 14:14:39.012197  588653 cli_runner.go:164] Run: docker container inspect ha-377523-m04 --format={{.State.Status}}
	I1008 14:14:39.030584  588653 status.go:371] ha-377523-m04 host status = "Running" (err=<nil>)
	I1008 14:14:39.030628  588653 host.go:66] Checking if "ha-377523-m04" exists ...
	I1008 14:14:39.030884  588653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-377523-m04
	I1008 14:14:39.048639  588653 host.go:66] Checking if "ha-377523-m04" exists ...
	I1008 14:14:39.048947  588653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:14:39.049011  588653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-377523-m04
	I1008 14:14:39.067774  588653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33201 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/ha-377523-m04/id_rsa Username:docker}
	I1008 14:14:39.170287  588653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:14:39.184047  588653 status.go:176] ha-377523-m04 status: &{Name:ha-377523-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.68s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.29s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-377523 node start m02 --alsologtostderr -v 5: (8.307361844s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.29s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.32s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-377523 stop --alsologtostderr -v 5: (36.992880401s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 start --wait true --alsologtostderr -v 5
E1008 14:15:50.106723  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:01.659143  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:01.665690  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:01.677106  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:01.698585  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:01.740141  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:01.821673  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:01.983291  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:02.304931  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:02.947010  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:04.229228  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:06.791146  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:11.912995  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:17.818159  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:16:22.154397  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-377523 start --wait true --alsologtostderr -v 5: (1m1.215123018s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.32s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.26s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-377523 node delete m03 --alsologtostderr -v 5: (8.43580679s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.26s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

TestMultiControlPlane/serial/StopCluster (35.96s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 stop --alsologtostderr -v 5
E1008 14:16:42.636303  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-377523 stop --alsologtostderr -v 5: (35.853006666s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377523 status --alsologtostderr -v 5: exit status 7 (108.696068ms)

-- stdout --
	ha-377523
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-377523-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-377523-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1008 14:17:14.388369  605332 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:17:14.388496  605332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:17:14.388507  605332 out.go:374] Setting ErrFile to fd 2...
	I1008 14:17:14.388514  605332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:17:14.388763  605332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	I1008 14:17:14.388996  605332 out.go:368] Setting JSON to false
	I1008 14:17:14.389037  605332 mustload.go:65] Loading cluster: ha-377523
	I1008 14:17:14.389191  605332 notify.go:220] Checking for updates...
	I1008 14:17:14.389464  605332 config.go:182] Loaded profile config "ha-377523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:17:14.389483  605332 status.go:174] checking status of ha-377523 ...
	I1008 14:17:14.389955  605332 cli_runner.go:164] Run: docker container inspect ha-377523 --format={{.State.Status}}
	I1008 14:17:14.408503  605332 status.go:371] ha-377523 host status = "Stopped" (err=<nil>)
	I1008 14:17:14.408547  605332 status.go:384] host is not running, skipping remaining checks
	I1008 14:17:14.408566  605332 status.go:176] ha-377523 status: &{Name:ha-377523 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:17:14.408630  605332 status.go:174] checking status of ha-377523-m02 ...
	I1008 14:17:14.408948  605332 cli_runner.go:164] Run: docker container inspect ha-377523-m02 --format={{.State.Status}}
	I1008 14:17:14.427850  605332 status.go:371] ha-377523-m02 host status = "Stopped" (err=<nil>)
	I1008 14:17:14.427874  605332 status.go:384] host is not running, skipping remaining checks
	I1008 14:17:14.427882  605332 status.go:176] ha-377523-m02 status: &{Name:ha-377523-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:17:14.427917  605332 status.go:174] checking status of ha-377523-m04 ...
	I1008 14:17:14.428197  605332 cli_runner.go:164] Run: docker container inspect ha-377523-m04 --format={{.State.Status}}
	I1008 14:17:14.446062  605332 status.go:371] ha-377523-m04 host status = "Stopped" (err=<nil>)
	I1008 14:17:14.446087  605332 status.go:384] host is not running, skipping remaining checks
	I1008 14:17:14.446111  605332 status.go:176] ha-377523-m04 status: &{Name:ha-377523-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.96s)

TestMultiControlPlane/serial/RestartCluster (54.39s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1008 14:17:23.598120  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-377523 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (53.557962958s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.39s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

TestMultiControlPlane/serial/AddSecondaryNode (44.89s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 node add --control-plane --alsologtostderr -v 5
E1008 14:18:45.520219  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-377523 node add --control-plane --alsologtostderr -v 5: (43.963820071s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-377523 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.89s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

TestJSONOutput/start/Command (40.6s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-344419 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-344419 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (40.600489211s)
--- PASS: TestJSONOutput/start/Command (40.60s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.78s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-344419 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-344419 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.79s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-344419 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-344419 --output=json --user=testUser: (5.789567188s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-256918 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-256918 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (69.758428ms)

-- stdout --
	{"specversion":"1.0","id":"47eb66a8-2b92-494e-97a6-6d22fb7d73fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-256918] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e8b631b-8d4f-4f7b-bfe3-8b3cd0b25142","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21681"}}
	{"specversion":"1.0","id":"feb0897d-abbe-4407-b7cc-97a9a7263483","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b51c660d-3379-44e6-b055-46796082fb72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig"}}
	{"specversion":"1.0","id":"4b429320-71c3-4589-b590-faf7572322a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube"}}
	{"specversion":"1.0","id":"22e78a20-449a-46cf-9bef-a7d728706613","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a2a89b6a-be40-43a4-8ca6-2e459ceb6f84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"60ef5405-22fe-48ce-bcef-8254a62bacad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-256918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-256918
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (29.18s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-920008 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-920008 --network=: (27.019045934s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-920008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-920008
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-920008: (2.139974209s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.18s)

TestKicCustomNetwork/use_default_bridge_network (24.55s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-310574 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-310574 --network=bridge: (22.566619877s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-310574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-310574
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-310574: (1.967390248s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.55s)

TestKicExistingNetwork (25.36s)

=== RUN   TestKicExistingNetwork
I1008 14:20:49.366619  516787 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1008 14:20:49.384618  516787 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1008 14:20:49.384696  516787 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1008 14:20:49.384716  516787 cli_runner.go:164] Run: docker network inspect existing-network
W1008 14:20:49.401767  516787 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1008 14:20:49.401809  516787 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1008 14:20:49.401830  516787 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1008 14:20:49.401967  516787 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 14:20:49.420045  516787 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-579739baec73 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:69:9e:8b:7e:c1} reservation:<nil>}
I1008 14:20:49.420413  516787 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001825010}
I1008 14:20:49.420448  516787 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1008 14:20:49.420518  516787 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1008 14:20:49.481903  516787 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-927148 --network=existing-network
E1008 14:20:50.107582  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:21:01.659174  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-927148 --network=existing-network: (23.235927177s)
helpers_test.go:175: Cleaning up "existing-network-927148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-927148
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-927148: (1.972050211s)
I1008 14:21:14.708279  516787 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.36s)

TestKicCustomSubnet (25.1s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-995204 --subnet=192.168.60.0/24
E1008 14:21:29.368987  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-995204 --subnet=192.168.60.0/24: (22.92538149s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-995204 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-995204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-995204
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-995204: (2.154610422s)
--- PASS: TestKicCustomSubnet (25.10s)

TestKicStaticIP (27.94s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-571426 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-571426 --static-ip=192.168.200.200: (25.685115023s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-571426 ip
helpers_test.go:175: Cleaning up "static-ip-571426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-571426
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-571426: (2.116253496s)
--- PASS: TestKicStaticIP (27.94s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (46.77s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-050192 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-050192 --driver=docker  --container-runtime=containerd: (20.631773799s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-053267 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-053267 --driver=docker  --container-runtime=containerd: (20.188147148s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-050192
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-053267
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-053267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-053267
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-053267: (2.329413557s)
helpers_test.go:175: Cleaning up "first-050192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-050192
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-050192: (2.379725127s)
--- PASS: TestMinikubeProfile (46.77s)

TestMountStart/serial/StartWithMountFirst (5.17s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-785074 --memory=3072 --mount-string /tmp/TestMountStartserial3394702064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-785074 --memory=3072 --mount-string /tmp/TestMountStartserial3394702064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.165463431s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.17s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-785074 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.33s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-801712 --memory=3072 --mount-string /tmp/TestMountStartserial3394702064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-801712 --memory=3072 --mount-string /tmp/TestMountStartserial3394702064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.328624479s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.33s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-801712 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-785074 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-785074 --alsologtostderr -v=5: (1.670577726s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-801712 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-801712
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-801712: (1.191676448s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (7.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-801712
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-801712: (6.051167981s)
--- PASS: TestMountStart/serial/RestartStopped (7.05s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-801712 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (69.05s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-439307 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-439307 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.547348351s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.05s)

TestMultiNode/serial/DeployApp2Nodes (3.75s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-439307 -- rollout status deployment/busybox: (2.315984391s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.75s)

TestMultiNode/serial/PingHostFrom2Pods (0.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

TestMultiNode/serial/ProfileList (0.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

TestMultiNode/serial/CopyFile (10.16s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp testdata/cp-test.txt multinode-439307:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp multinode-439307:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1350338038/001/cp-test_multinode-439307.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp multinode-439307:/home/docker/cp-test.txt multinode-439307-m02:/home/docker/cp-test_multinode-439307_multinode-439307-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m02 "sudo cat /home/docker/cp-test_multinode-439307_multinode-439307-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp multinode-439307:/home/docker/cp-test.txt multinode-439307-m03:/home/docker/cp-test_multinode-439307_multinode-439307-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m03 "sudo cat /home/docker/cp-test_multinode-439307_multinode-439307-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp testdata/cp-test.txt multinode-439307-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp multinode-439307-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1350338038/001/cp-test_multinode-439307-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp multinode-439307-m02:/home/docker/cp-test.txt multinode-439307:/home/docker/cp-test_multinode-439307-m02_multinode-439307.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307 "sudo cat /home/docker/cp-test_multinode-439307-m02_multinode-439307.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp multinode-439307-m02:/home/docker/cp-test.txt multinode-439307-m03:/home/docker/cp-test_multinode-439307-m02_multinode-439307-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m03 "sudo cat /home/docker/cp-test_multinode-439307-m02_multinode-439307-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp testdata/cp-test.txt multinode-439307-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp multinode-439307-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1350338038/001/cp-test_multinode-439307-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp multinode-439307-m03:/home/docker/cp-test.txt multinode-439307:/home/docker/cp-test_multinode-439307-m03_multinode-439307.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307 "sudo cat /home/docker/cp-test_multinode-439307-m03_multinode-439307.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 cp multinode-439307-m03:/home/docker/cp-test.txt multinode-439307-m02:/home/docker/cp-test_multinode-439307-m03_multinode-439307-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m02 "sudo cat /home/docker/cp-test_multinode-439307-m03_multinode-439307-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.16s)
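Note: every step above is one of three `cp` directions (host-to-node, node-to-host, node-to-node), each verified by `ssh -n <node>` reading the file back; a minimal sketch of one round trip:
	$ out/minikube-linux-amd64 -p multinode-439307 cp testdata/cp-test.txt multinode-439307-m02:/home/docker/cp-test.txt
	$ out/minikube-linux-amd64 -p multinode-439307 ssh -n multinode-439307-m02 "sudo cat /home/docker/cp-test.txt"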

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-439307 node stop m03: (1.245909396s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-439307 status: exit status 7 (515.473498ms)

-- stdout --
	multinode-439307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-439307-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-439307-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-439307 status --alsologtostderr: exit status 7 (520.938751ms)

-- stdout --
	multinode-439307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-439307-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-439307-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1008 14:24:58.220692  669025 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:24:58.220796  669025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:24:58.220801  669025 out.go:374] Setting ErrFile to fd 2...
	I1008 14:24:58.220805  669025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:24:58.221057  669025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	I1008 14:24:58.221241  669025 out.go:368] Setting JSON to false
	I1008 14:24:58.221271  669025 mustload.go:65] Loading cluster: multinode-439307
	I1008 14:24:58.221402  669025 notify.go:220] Checking for updates...
	I1008 14:24:58.221670  669025 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:24:58.221684  669025 status.go:174] checking status of multinode-439307 ...
	I1008 14:24:58.222133  669025 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:24:58.243263  669025 status.go:371] multinode-439307 host status = "Running" (err=<nil>)
	I1008 14:24:58.243318  669025 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:24:58.243660  669025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307
	I1008 14:24:58.262395  669025 host.go:66] Checking if "multinode-439307" exists ...
	I1008 14:24:58.262690  669025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:24:58.262729  669025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
	I1008 14:24:58.281516  669025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
	I1008 14:24:58.384964  669025 ssh_runner.go:195] Run: systemctl --version
	I1008 14:24:58.391657  669025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:24:58.405875  669025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:24:58.464193  669025 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-08 14:24:58.453417247 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:24:58.464793  669025 kubeconfig.go:125] found "multinode-439307" server: "https://192.168.67.2:8443"
	I1008 14:24:58.464835  669025 api_server.go:166] Checking apiserver status ...
	I1008 14:24:58.464881  669025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:24:58.477761  669025 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup
	W1008 14:24:58.487395  669025 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:24:58.487460  669025 ssh_runner.go:195] Run: ls
	I1008 14:24:58.492104  669025 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1008 14:24:58.496745  669025 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1008 14:24:58.496778  669025 status.go:463] multinode-439307 apiserver status = Running (err=<nil>)
	I1008 14:24:58.496792  669025 status.go:176] multinode-439307 status: &{Name:multinode-439307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:24:58.496827  669025 status.go:174] checking status of multinode-439307-m02 ...
	I1008 14:24:58.497175  669025 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Status}}
	I1008 14:24:58.515792  669025 status.go:371] multinode-439307-m02 host status = "Running" (err=<nil>)
	I1008 14:24:58.515829  669025 host.go:66] Checking if "multinode-439307-m02" exists ...
	I1008 14:24:58.516180  669025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m02
	I1008 14:24:58.535523  669025 host.go:66] Checking if "multinode-439307-m02" exists ...
	I1008 14:24:58.535873  669025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:24:58.535918  669025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
	I1008 14:24:58.554565  669025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
	I1008 14:24:58.656800  669025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:24:58.670185  669025 status.go:176] multinode-439307-m02 status: &{Name:multinode-439307-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:24:58.670224  669025 status.go:174] checking status of multinode-439307-m03 ...
	I1008 14:24:58.670518  669025 cli_runner.go:164] Run: docker container inspect multinode-439307-m03 --format={{.State.Status}}
	I1008 14:24:58.689054  669025 status.go:371] multinode-439307-m03 host status = "Stopped" (err=<nil>)
	I1008 14:24:58.689080  669025 status.go:384] host is not running, skipping remaining checks
	I1008 14:24:58.689087  669025 status.go:176] multinode-439307-m03 status: &{Name:multinode-439307-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
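Note: the exit code is the interesting part here: `status` exits 7 instead of 0 while any node in the profile is stopped, so scripts can branch on it; a sketch:
	$ out/minikube-linux-amd64 -p multinode-439307 node stop m03
	$ out/minikube-linux-amd64 -p multinode-439307 status; echo $?   # 7 while m03 is stopped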

TestMultiNode/serial/StartAfterStop (7.31s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-439307 node start m03 -v=5 --alsologtostderr: (6.562659645s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.31s)

TestMultiNode/serial/RestartKeepsNodes (70.79s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-439307
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-439307
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-439307: (24.923028722s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-439307 --wait=true -v=5 --alsologtostderr
E1008 14:25:50.107720  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:01.659124  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-439307 --wait=true -v=5 --alsologtostderr: (45.754638836s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-439307
--- PASS: TestMultiNode/serial/RestartKeepsNodes (70.79s)
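Note: the property being tested reduces to comparing `node list` output across a full stop/start cycle; a sketch with the same flags:
	$ out/minikube-linux-amd64 node list -p multinode-439307 > /tmp/nodes.before
	$ out/minikube-linux-amd64 stop -p multinode-439307
	$ out/minikube-linux-amd64 start -p multinode-439307 --wait=true
	$ out/minikube-linux-amd64 node list -p multinode-439307 | diff /tmp/nodes.before -   # empty diff: all nodes kept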

TestMultiNode/serial/DeleteNode (5.28s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-439307 node delete m03: (4.665490709s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.28s)

TestMultiNode/serial/StopMultiNode (23.92s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-439307 stop: (23.742633467s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-439307 status: exit status 7 (93.542706ms)

-- stdout --
	multinode-439307
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-439307-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-439307 status --alsologtostderr: exit status 7 (87.949538ms)

-- stdout --
	multinode-439307
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-439307-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1008 14:26:45.951029  678774 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:26:45.951299  678774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:26:45.951309  678774 out.go:374] Setting ErrFile to fd 2...
	I1008 14:26:45.951313  678774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:26:45.951529  678774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	I1008 14:26:45.951725  678774 out.go:368] Setting JSON to false
	I1008 14:26:45.951754  678774 mustload.go:65] Loading cluster: multinode-439307
	I1008 14:26:45.951853  678774 notify.go:220] Checking for updates...
	I1008 14:26:45.952221  678774 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:26:45.952240  678774 status.go:174] checking status of multinode-439307 ...
	I1008 14:26:45.952717  678774 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
	I1008 14:26:45.970348  678774 status.go:371] multinode-439307 host status = "Stopped" (err=<nil>)
	I1008 14:26:45.970399  678774 status.go:384] host is not running, skipping remaining checks
	I1008 14:26:45.970413  678774 status.go:176] multinode-439307 status: &{Name:multinode-439307 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:26:45.970481  678774 status.go:174] checking status of multinode-439307-m02 ...
	I1008 14:26:45.970762  678774 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Status}}
	I1008 14:26:45.988473  678774 status.go:371] multinode-439307-m02 host status = "Stopped" (err=<nil>)
	I1008 14:26:45.988495  678774 status.go:384] host is not running, skipping remaining checks
	I1008 14:26:45.988502  678774 status.go:176] multinode-439307-m02 status: &{Name:multinode-439307-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.92s)

TestMultiNode/serial/RestartMultiNode (47.41s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-439307 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1008 14:27:13.180417  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-439307 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (46.791886885s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-439307 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.41s)

TestMultiNode/serial/ValidateNameConflict (24.67s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-439307
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-439307-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-439307-m02 --driver=docker  --container-runtime=containerd: exit status 14 (69.656616ms)

-- stdout --
	* [multinode-439307-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-439307-m02' is duplicated with machine name 'multinode-439307-m02' in profile 'multinode-439307'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-439307-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-439307-m03 --driver=docker  --container-runtime=containerd: (22.309342424s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-439307
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-439307: exit status 80 (292.667198ms)

-- stdout --
	* Adding node m03 to cluster multinode-439307 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-439307-m03 already exists in multinode-439307-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-439307-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-439307-m03: (1.94416383s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.67s)
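Note: both failures are deliberate guardrails: a new profile may not reuse an existing machine name (exit 14), and `node add` refuses a node name already owned elsewhere (exit 80); a sketch of the first check:
	$ out/minikube-linux-amd64 start -p multinode-439307-m02 --driver=docker --container-runtime=containerd; echo $?   # 14: name collides with a machine in 'multinode-439307'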

TestPreload (103.32s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-020977 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-020977 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (45.104308505s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-020977 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-020977 image pull gcr.io/k8s-minikube/busybox: (1.643886261s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-020977
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-020977: (5.603605608s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-020977 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-020977 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (48.253389356s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-020977 image list
helpers_test.go:175: Cleaning up "test-preload-020977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-020977
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-020977: (2.487655243s)
--- PASS: TestPreload (103.32s)
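Note: the preload scenario condenses to: start without preloads, pull an extra image, restart with preloads enabled, and confirm the image survived; a sketch:
	$ p=test-preload-020977
	$ out/minikube-linux-amd64 start -p "$p" --memory=3072 --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=containerd
	$ out/minikube-linux-amd64 -p "$p" image pull gcr.io/k8s-minikube/busybox
	$ out/minikube-linux-amd64 stop -p "$p"
	$ out/minikube-linux-amd64 start -p "$p" --memory=3072 --wait=true --driver=docker --container-runtime=containerd
	$ out/minikube-linux-amd64 -p "$p" image list | grep busybox   # should still be listed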

TestScheduledStopUnix (96.86s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-557758 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-557758 --memory=3072 --driver=docker  --container-runtime=containerd: (20.504001368s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-557758 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-557758 -n scheduled-stop-557758
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-557758 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1008 14:30:08.465666  516787 retry.go:31] will retry after 148.378µs: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.466878  516787 retry.go:31] will retry after 104.107µs: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.468071  516787 retry.go:31] will retry after 235.886µs: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.469216  516787 retry.go:31] will retry after 348.244µs: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.470373  516787 retry.go:31] will retry after 615.518µs: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.471510  516787 retry.go:31] will retry after 445.39µs: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.472659  516787 retry.go:31] will retry after 752.435µs: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.473788  516787 retry.go:31] will retry after 1.850292ms: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.476002  516787 retry.go:31] will retry after 2.088835ms: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.478152  516787 retry.go:31] will retry after 2.927803ms: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.481351  516787 retry.go:31] will retry after 6.736994ms: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.488572  516787 retry.go:31] will retry after 9.04431ms: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.497760  516787 retry.go:31] will retry after 6.510021ms: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.505082  516787 retry.go:31] will retry after 12.4095ms: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.518373  516787 retry.go:31] will retry after 24.796849ms: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
I1008 14:30:08.543692  516787 retry.go:31] will retry after 47.573491ms: open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/scheduled-stop-557758/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-557758 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-557758 -n scheduled-stop-557758
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-557758
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-557758 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1008 14:30:50.106865  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/addons-447971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:31:01.659261  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-557758
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-557758: exit status 7 (72.121999ms)

-- stdout --
	scheduled-stop-557758
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-557758 -n scheduled-stop-557758
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-557758 -n scheduled-stop-557758: exit status 7 (72.31055ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-557758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-557758
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-557758: (4.864848753s)
--- PASS: TestScheduledStopUnix (96.86s)
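Note: scheduled stops are armed, re-armed, and cancelled through the same `stop` subcommand, with the countdown visible via `status`; a minimal sketch:
	$ out/minikube-linux-amd64 stop -p scheduled-stop-557758 --schedule 5m
	$ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-557758
	$ out/minikube-linux-amd64 stop -p scheduled-stop-557758 --cancel-scheduled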

TestInsufficientStorage (9.49s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-556934 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-556934 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.014911851s)

-- stdout --
	{"specversion":"1.0","id":"f8a1008c-0ab3-4e32-ae45-cf149898d0b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-556934] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a01c104-0448-4753-bded-4334cfcdd34b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21681"}}
	{"specversion":"1.0","id":"fbed76bd-f9e9-4a14-b809-73512b2465cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"afdcec7a-5394-49b1-aeee-33594bf48376","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig"}}
	{"specversion":"1.0","id":"d6a92b52-2af4-4b70-81d7-fcfa23b2ccb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube"}}
	{"specversion":"1.0","id":"1e23550a-f8e5-41b9-950d-32e2ffaa8170","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"05b65019-e38b-45db-b2bb-7f62749647bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8de5f830-2af5-4fbb-b3cf-64d79d1fb34d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"85e63a9f-7795-4b4a-a1d5-bf3cc8752524","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ad218876-734b-4244-afce-feb28854d5ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e246583-b82e-4b43-94c5-07aec2ae3910","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2686c026-2bac-4736-8ab9-1c66e3f1856b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-556934\" primary control-plane node in \"insufficient-storage-556934\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ce268b9-7d6f-41b3-bd05-2eb26f56aa45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9db2d491-2076-43f7-8e2f-da2c64839078","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fe3e079-214a-440b-b7a5-711de9528980","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-556934 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-556934 --output=json --layout=cluster: exit status 7 (293.137425ms)

-- stdout --
	{"Name":"insufficient-storage-556934","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-556934","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1008 14:31:31.658820  701209 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-556934" does not appear in /home/jenkins/minikube-integration/21681-513010/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-556934 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-556934 --output=json --layout=cluster: exit status 7 (290.624943ms)

-- stdout --
	{"Name":"insufficient-storage-556934","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-556934","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1008 14:31:31.950338  701318 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-556934" does not appear in /home/jenkins/minikube-integration/21681-513010/kubeconfig
	E1008 14:31:31.961068  701318 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/insufficient-storage-556934/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-556934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-556934
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-556934: (1.887510449s)
--- PASS: TestInsufficientStorage (9.49s)
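Note: the out-of-space condition is simulated by the two MINIKUBE_TEST_* variables echoed in the JSON above, which appear to override how full /var looks to minikube (semantics assumed from this run's output); a sketch reproducing exit code 26:
	$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	    out/minikube-linux-amd64 start -p insufficient-storage-556934 --memory=3072 --output=json --driver=docker --container-runtime=containerd
	$ echo $?   # 26 (RSRC_DOCKER_STORAGE); per the error text, '--force' skips the check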

TestRunningBinaryUpgrade (48.67s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2516386297 start -p running-upgrade-938064 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2516386297 start -p running-upgrade-938064 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (25.578493836s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-938064 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-938064 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.419067998s)
helpers_test.go:175: Cleaning up "running-upgrade-938064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-938064
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-938064: (2.034562026s)
--- PASS: TestRunningBinaryUpgrade (48.67s)

TestKubernetesUpgrade (325.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-051336 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-051336 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.841371275s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-051336
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-051336: (1.846657378s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-051336 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-051336 status --format={{.Host}}: exit status 7 (82.985411ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-051336 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-051336 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m39.912738442s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-051336 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-051336 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-051336 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (87.603094ms)

-- stdout --
	* [kubernetes-upgrade-051336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-051336
	    minikube start -p kubernetes-upgrade-051336 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0513362 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-051336 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-051336 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-051336 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (13.782206914s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-051336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-051336
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-051336: (2.60763046s)
--- PASS: TestKubernetesUpgrade (325.26s)
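Note: version changes are one-directional: an upgrade is `stop` plus `start --kubernetes-version=<newer>`, while the downgrade attempt exits 106 and leaves the cluster untouched; a condensed sketch:
	$ p=kubernetes-upgrade-051336
	$ out/minikube-linux-amd64 start -p "$p" --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	$ out/minikube-linux-amd64 stop -p "$p"
	$ out/minikube-linux-amd64 start -p "$p" --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=containerd
	$ out/minikube-linux-amd64 start -p "$p" --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd; echo $?   # 106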

TestMissingContainerUpgrade (121.85s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1465887867 start -p missing-upgrade-189204 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1465887867 start -p missing-upgrade-189204 --memory=3072 --driver=docker  --container-runtime=containerd: (47.105260167s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-189204
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-189204
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-189204 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-189204 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m11.459282427s)
helpers_test.go:175: Cleaning up "missing-upgrade-189204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-189204
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-189204: (2.214779615s)
--- PASS: TestMissingContainerUpgrade (121.85s)
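Note: the scenario is recovery from a minikube container deleted behind minikube's back: an older binary (extracted to /tmp by the harness) creates the cluster, docker removes its container, and a plain `start` with the current binary recreates it; a sketch:
	$ /tmp/minikube-v1.32.0.1465887867 start -p missing-upgrade-189204 --memory=3072 --driver=docker --container-runtime=containerd
	$ docker stop missing-upgrade-189204 && docker rm missing-upgrade-189204
	$ out/minikube-linux-amd64 start -p missing-upgrade-189204 --memory=3072 --driver=docker --container-runtime=containerd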

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-023815 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-023815 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (91.208842ms)

-- stdout --
	* [NoKubernetes-023815] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
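Note: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error points at a globally configured version as the usual cause; a sketch of the failing call and the suggested fix:
	$ out/minikube-linux-amd64 start -p NoKubernetes-023815 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd; echo $?   # 14
	$ out/minikube-linux-amd64 config unset kubernetes-version
	$ out/minikube-linux-amd64 start -p NoKubernetes-023815 --no-kubernetes --driver=docker --container-runtime=containerd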

TestNoKubernetes/serial/StartWithK8s (32.62s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-023815 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-023815 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.226660747s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-023815 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.62s)

TestNetworkPlugins/group/false (8.27s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-774397 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-774397 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (918.075903ms)

-- stdout --
	* [false-774397] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1008 14:31:38.535690  703689 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:31:38.535940  703689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:31:38.535948  703689 out.go:374] Setting ErrFile to fd 2...
	I1008 14:31:38.535953  703689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:31:38.536195  703689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
	I1008 14:31:38.536695  703689 out.go:368] Setting JSON to false
	I1008 14:31:38.537731  703689 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8048,"bootTime":1759925851,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:31:38.537850  703689 start.go:141] virtualization: kvm guest
	I1008 14:31:38.612395  703689 out.go:179] * [false-774397] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:31:38.718692  703689 notify.go:220] Checking for updates...
	I1008 14:31:38.772008  703689 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:31:38.991349  703689 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:31:39.114045  703689 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
	I1008 14:31:39.148003  703689 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
	I1008 14:31:39.173801  703689 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:31:39.176294  703689 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:31:39.181477  703689 config.go:182] Loaded profile config "NoKubernetes-023815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:31:39.181602  703689 config.go:182] Loaded profile config "force-systemd-env-071674": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:31:39.181706  703689 config.go:182] Loaded profile config "offline-containerd-925961": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1008 14:31:39.181836  703689 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:31:39.207554  703689 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:31:39.207717  703689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:31:39.266664  703689 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:false NGoroutines:63 SystemTime:2025-10-08 14:31:39.255723078 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:31:39.266820  703689 docker.go:318] overlay module found
	I1008 14:31:39.358923  703689 out.go:179] * Using the docker driver based on user configuration
	I1008 14:31:39.386227  703689 start.go:305] selected driver: docker
	I1008 14:31:39.386257  703689 start.go:925] validating driver "docker" against <nil>
	I1008 14:31:39.386276  703689 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:31:39.395373  703689 out.go:203] 
	W1008 14:31:39.397653  703689 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1008 14:31:39.401673  703689 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-774397 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-774397

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-774397

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-774397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-774397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-774397

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-774397

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-774397

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-774397

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-774397

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-774397

>>> host: /etc/nsswitch.conf:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: /etc/hosts:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: /etc/resolv.conf:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-774397

>>> host: crictl pods:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: crictl containers:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> k8s: describe netcat deployment:
error: context "false-774397" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-774397" does not exist

>>> k8s: netcat logs:
error: context "false-774397" does not exist

>>> k8s: describe coredns deployment:
error: context "false-774397" does not exist

>>> k8s: describe coredns pods:
error: context "false-774397" does not exist

>>> k8s: coredns logs:
error: context "false-774397" does not exist

>>> k8s: describe api server pod(s):
error: context "false-774397" does not exist

>>> k8s: api server logs:
error: context "false-774397" does not exist

>>> host: /etc/cni:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: ip a s:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: ip r s:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: iptables-save:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: iptables table nat:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> k8s: describe kube-proxy daemon set:
error: context "false-774397" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-774397" does not exist

>>> k8s: kube-proxy logs:
error: context "false-774397" does not exist

>>> host: kubelet daemon status:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: kubelet daemon config:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> k8s: kubelet logs:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-774397

>>> host: docker daemon status:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: docker daemon config:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: /etc/docker/daemon.json:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: docker system info:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: cri-docker daemon status:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: cri-docker daemon config:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: cri-dockerd version:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: containerd daemon status:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: containerd daemon config:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: /etc/containerd/config.toml:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: containerd config dump:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: crio daemon status:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: crio daemon config:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: /etc/crio:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

>>> host: crio config:
* Profile "false-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-774397"

----------------------- debugLogs end: false-774397 [took: 7.183442812s] --------------------------------
helpers_test.go:175: Cleaning up "false-774397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-774397
--- PASS: TestNetworkPlugins/group/false (8.27s)
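
This test passes because minikube refuses --cni=false with the containerd runtime: containerd has no built-in pod networking, so a CNI plugin is mandatory and the command exits 14 before any node is created (the debug probes above all fail with "context not found" precisely because no cluster was ever started). A hedged sketch of that validation rule, with a hypothetical helper rather than minikube's actual code:

package main

import (
	"fmt"
	"os"
)

// requireCNI mirrors the rule the test exercises: non-Docker runtimes
// such as containerd need a CNI plugin for pod networking, so an
// explicit --cni=false is rejected. Hypothetical helper, not the
// actual minikube implementation.
func requireCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime != "docker" {
		return fmt.Errorf("The %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	if err := requireCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // matches the exit status 14 asserted above
	}
}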

TestNoKubernetes/serial/StartWithStopK8s (14.98s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-023815 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-023815 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (9.32777496s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-023815 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-023815 status -o json: exit status 2 (327.545476ms)

-- stdout --
	{"Name":"NoKubernetes-023815","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-023815
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-023815: (5.321544112s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (14.98s)
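
The JSON above is what the test parses to confirm the host container is still running while the Kubernetes components are stopped; note that `status` itself exits 2 in this state. A small sketch of decoding that payload, with the struct shape inferred from the output rather than taken from the minikube source:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// profileStatus models the fields visible in the JSON above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-023815","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		log.Fatal(err)
	}
	// The test's expectation: container up, Kubernetes components down.
	fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped")
}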

TestNoKubernetes/serial/Start (5.21s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-023815 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1008 14:32:24.731082  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-023815 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (5.207451585s)
--- PASS: TestNoKubernetes/serial/Start (5.21s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-023815 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-023815 "sudo systemctl is-active --quiet service kubelet": exit status 1 (331.580644ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
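
The assertion leans on systemctl's exit-code contract: `systemctl is-active --quiet` prints nothing and reports state purely through the exit code, 0 for active and non-zero (typically 3) for inactive; the `ssh: Process exited with status 3` above is the inactive case the test wants. A sketch of running that probe locally and reading the code (assumes a systemd host with a kubelet unit):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code is the whole answer: 0 = active, non-zero = not active.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err == nil {
		fmt.Println("kubelet is active")
		return
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Status 3 is what the test above observes: unit not running.
		fmt.Println("kubelet is not active, exit status:", exitErr.ExitCode())
	} else {
		fmt.Println("failed to run systemctl:", err)
	}
}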

TestNoKubernetes/serial/ProfileList (7.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (2.588047305s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (4.69261205s)
--- PASS: TestNoKubernetes/serial/ProfileList (7.28s)

TestStoppedBinaryUpgrade/Setup (0.52s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

TestStoppedBinaryUpgrade/Upgrade (60.06s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.4203320554 start -p stopped-upgrade-678785 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.4203320554 start -p stopped-upgrade-678785 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (30.045748164s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.4203320554 -p stopped-upgrade-678785 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.4203320554 -p stopped-upgrade-678785 stop: (1.263839422s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-678785 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-678785 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.750261051s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (60.06s)

TestNoKubernetes/serial/Stop (2.57s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-023815
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-023815: (2.569473096s)
--- PASS: TestNoKubernetes/serial/Stop (2.57s)

TestNoKubernetes/serial/StartNoArgs (6.5s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-023815 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-023815 --driver=docker  --container-runtime=containerd: (6.49994444s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.50s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-023815 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-023815 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.765725ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-678785
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-678785: (1.231235948s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

TestPause/serial/Start (40.34s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-779828 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-779828 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (40.341077583s)
--- PASS: TestPause/serial/Start (40.34s)

TestPause/serial/SecondStartNoReconfiguration (6.39s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-779828 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-779828 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.373573756s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.39s)

TestPause/serial/Pause (0.9s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-779828 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-779828 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-779828 --output=json --layout=cluster: exit status 2 (318.673ms)

-- stdout --
	{"Name":"pause-779828","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-779828","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
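
With --layout=cluster, status encodes component states as HTTP-style codes (200 OK, 405 Stopped, 418 Paused), and a paused cluster makes the status command itself exit 2, which is why the non-zero exit above is expected. A sketch of picking those codes out of a trimmed copy of the payload above; the struct shapes are inferred from the output, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Shapes inferred from the --layout=cluster output above.
type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		Components map[string]component
	}
}

func main() {
	raw := `{"Name":"pause-779828","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-779828","Components":{
	    "apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	    "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		log.Fatal(err)
	}
	// 418 ("Paused") at the cluster level is what VerifyStatus expects.
	fmt.Println(st.StatusName, st.Nodes[0].Components["apiserver"].StatusName)
}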

TestPause/serial/Unpause (1.34s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-779828 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-779828 --alsologtostderr -v=5: (1.337188178s)
--- PASS: TestPause/serial/Unpause (1.34s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-779828 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (2.69s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-779828 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-779828 --alsologtostderr -v=5: (2.692270757s)
--- PASS: TestPause/serial/DeletePaused (2.69s)

TestPause/serial/VerifyDeletedResources (14.84s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.763987143s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-779828
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-779828: exit status 1 (22.401995ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-779828: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.84s)
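
Here a failing command is the success condition: once the profile is deleted, `docker volume inspect` should exit non-zero with "no such volume", and the test treats a clean exit as leaked state. A sketch of that inverted check (assumes a local docker CLI is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// After "minikube delete", the profile's volume must be gone, so a
	// successful inspect would actually indicate a cleanup failure.
	out, err := exec.Command("docker", "volume", "inspect", "pause-779828").CombinedOutput()
	if err == nil {
		fmt.Println("volume still exists; cleanup failed")
		return
	}
	if strings.Contains(string(out), "no such volume") {
		fmt.Println("volume removed as expected")
	} else {
		fmt.Println("inspect failed for another reason:", string(out))
	}
}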

TestNetworkPlugins/group/auto/Start (44.07s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (44.065910685s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.07s)

TestNetworkPlugins/group/kindnet/Start (41.01s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (41.011557256s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-774397 "pgrep -a kubelet"
I1008 14:35:25.109907  516787 config.go:182] Loaded profile config "auto-774397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-774397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t5vvh" [f72d0fa1-f507-4e6d-baa4-f3116a673ed6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t5vvh" [f72d0fa1-f507-4e6d-baa4-f3116a673ed6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004879934s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.19s)
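
The NetCatPod checks all follow the same pattern: apply testdata/netcat-deployment.yaml, then poll until a pod matching app=netcat reports Running (the Pending → Running transition visible above). A rough sketch of such a poll via kubectl; the context name and label come from the log, and this is a simplification of the helpers the suite actually uses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Ask only for the phase of pods carrying the test label.
		out, err := exec.Command("kubectl", "--context", "auto-774397",
			"get", "pods", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("netcat pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}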

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-h28l4" [2ed1db19-c6cc-485a-9d92-983ed69c7ee4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003522402s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-774397 "pgrep -a kubelet"
I1008 14:35:33.142860  516787 config.go:182] Loaded profile config "kindnet-774397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-774397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hc226" [cbf55d14-585c-4197-b13b-f64c7f4b1294] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hc226" [cbf55d14-585c-4197-b13b-f64c7f4b1294] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004546761s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.20s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-774397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
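
The Localhost and HairPin probes differ only in target: `nc -z localhost 8080` checks the pod can reach its own port directly, while `nc -z netcat 8080` makes the pod dial itself back through its own Service, which only works when the CNI handles hairpin traffic. A sketch issuing both probes from the deployment, reusing the nc flags from the log; the wrapper function is hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs nc inside the netcat deployment; target "localhost" tests
// the direct path, target "netcat" (the Service name) tests hairpin.
func probe(target string) error {
	return exec.Command("kubectl", "--context", "auto-774397",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)).Run()
}

func main() {
	for _, target := range []string{"localhost", "netcat"} {
		if err := probe(target); err != nil {
			fmt.Println(target, "probe failed:", err)
		} else {
			fmt.Println(target, "probe succeeded")
		}
	}
}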

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-774397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/calico/Start (52.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (52.827350611s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.83s)

TestNetworkPlugins/group/custom-flannel/Start (47.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (47.479217087s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.48s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-plkdc" [3e56806f-ca60-4203-b38f-d2d2a65e99e8] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-plkdc" [3e56806f-ca60-4203-b38f-d2d2a65e99e8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004417972s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-774397 "pgrep -a kubelet"
I1008 14:36:50.086583  516787 config.go:182] Loaded profile config "custom-flannel-774397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-774397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m97hk" [c6df6076-c6ea-476a-9b35-c4ca4d20078d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m97hk" [c6df6076-c6ea-476a-9b35-c4ca4d20078d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003868086s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-774397 "pgrep -a kubelet"
I1008 14:36:52.074734  516787 config.go:182] Loaded profile config "calico-774397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-774397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rmqw7" [e49e7c49-eb52-40c5-9fc2-01e10528b676] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rmqw7" [e49e7c49-eb52-40c5-9fc2-01e10528b676] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.00435091s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-774397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-774397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)
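
Note: the DNS/Localhost/HairPin triples above probe three distinct paths through each CNI: DNS resolves kubernetes.default via cluster DNS, Localhost has the netcat container dial its own port 8080 directly, and HairPin has the pod dial itself back through the name "netcat" (presumably a Service fronting the deployment), which only succeeds when the CNI or kube-proxy handles hairpin NAT. A by-hand sketch against the calico-774397 context, with the harness's -i 5 interval flag dropped for brevity:

	kubectl --context calico-774397 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context calico-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"
	kubectl --context calico-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"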

TestNetworkPlugins/group/enable-default-cni/Start (81.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m21.522085954s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.52s)

TestNetworkPlugins/group/flannel/Start (64.61s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m4.606594675s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.61s)

TestNetworkPlugins/group/bridge/Start (76.46s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-774397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m16.456351745s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.46s)

TestStartStop/group/old-k8s-version/serial/FirstStart (52.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-249470 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-249470 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (52.533290572s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.53s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-j4lgl" [14f2f915-7353-4f2a-af99-8cd0304c0214] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003746171s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-774397 "pgrep -a kubelet"
I1008 14:38:29.661702  516787 config.go:182] Loaded profile config "enable-default-cni-774397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-774397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xmk2t" [0abf548b-5e1a-43d5-9de0-f99d5c924ae1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xmk2t" [0abf548b-5e1a-43d5-9de0-f99d5c924ae1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004321515s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-249470 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [55c68227-8575-4f8a-9296-a13541116bb3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [55c68227-8575-4f8a-9296-a13541116bb3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003330267s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-249470 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.29s)
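
Note: DeployApp creates a bare busybox pod, waits for it to run, and then reads the open-file limit inside it, presumably to confirm the container runtime propagates the expected ulimit. The two steps by hand:

	kubectl --context old-k8s-version-249470 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-249470 exec busybox -- /bin/sh -c "ulimit -n"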

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-774397 "pgrep -a kubelet"
I1008 14:38:34.232073  516787 config.go:182] Loaded profile config "flannel-774397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-774397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nj5g9" [308aaa6a-188b-4258-b632-047494511dd8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nj5g9" [308aaa6a-188b-4258-b632-047494511dd8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.006453144s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-774397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-249470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-249470 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-774397 "pgrep -a kubelet"
I1008 14:38:41.232241  516787 config.go:182] Loaded profile config "bridge-774397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-774397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2f4jj" [e5ff0ed8-430d-42a1-a4ab-34d7467a38bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2f4jj" [e5ff0ed8-430d-42a1-a4ab-34d7467a38bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004466392s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-249470 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-249470 --alsologtostderr -v=3: (12.010338705s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-774397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-774397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-774397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249470 -n old-k8s-version-249470
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249470 -n old-k8s-version-249470: exit status 7 (93.137829ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-249470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
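
Note: the non-zero exit above is expected, not a failure. minikube status encodes cluster state in its exit code (by all appearances a bitmask, with 7 meaning host, apiserver, and kubelet are all down), which is exactly what this subtest wants after Stop; the harness marks it "(may be ok)" and goes on to enable the dashboard addon against the stopped profile. The same check by hand:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249470; echo "exit: $?"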

TestStartStop/group/old-k8s-version/serial/SecondStart (51.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-249470 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-249470 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (50.752551491s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249470 -n old-k8s-version-249470
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.13s)

TestStartStop/group/embed-certs/serial/FirstStart (45.34s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-528248 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-528248 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (45.334973166s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.34s)

TestStartStop/group/no-preload/serial/FirstStart (52.61s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-831830 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-831830 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (52.608047296s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.61s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-081742 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-081742 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.578519156s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47.58s)

TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-528248 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e94b2d36-da5b-4c20-bc96-3579d3471d70] Pending
helpers_test.go:352: "busybox" [e94b2d36-da5b-4c20-bc96-3579d3471d70] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e94b2d36-da5b-4c20-bc96-3579d3471d70] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003694018s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-528248 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ksscq" [dfd854e6-c6bf-4bce-9621-81b6fd35223a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003984787s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ksscq" [dfd854e6-c6bf-4bce-9621-81b6fd35223a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00372649s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-249470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-528248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-528248 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/embed-certs/serial/Stop (13.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-528248 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-528248 --alsologtostderr -v=3: (13.964831484s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.96s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-249470 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
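
Note: VerifyKubernetesImages dumps the images present in the node's containerd store and appears to compare them against the stock minikube set; the "Found non-minikube image" lines are informational, not failures. The raw listing can be pulled with:

	out/minikube-linux-amd64 -p old-k8s-version-249470 image list --format=json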

TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-249470 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249470 -n old-k8s-version-249470
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249470 -n old-k8s-version-249470: exit status 2 (369.11494ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-249470 -n old-k8s-version-249470
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-249470 -n old-k8s-version-249470: exit status 2 (359.409531ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-249470 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249470 -n old-k8s-version-249470
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-249470 -n old-k8s-version-249470
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.94s)
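
Note: Pause drives a full pause/unpause cycle and verifies each intermediate state via exit codes: after pause, both the APIServer check ("Paused") and the Kubelet check ("Stopped") exit 2, which the harness again treats as expected. A condensed by-hand version (exit codes as observed in this run):

	out/minikube-linux-amd64 pause -p old-k8s-version-249470 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249470   # Paused, exit 2
	out/minikube-linux-amd64 unpause -p old-k8s-version-249470 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249470   # exit 0 once resumed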

TestStartStop/group/no-preload/serial/DeployApp (8.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-831830 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9d755ac8-66a2-4dce-b17e-a70558170062] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9d755ac8-66a2-4dce-b17e-a70558170062] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004305727s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-831830 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.29s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-081742 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2fb74982-d3c9-4adb-8a31-897ff7ad9068] Pending
helpers_test.go:352: "busybox" [2fb74982-d3c9-4adb-8a31-897ff7ad9068] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2fb74982-d3c9-4adb-8a31-897ff7ad9068] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004705148s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-081742 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

TestStartStop/group/newest-cni/serial/FirstStart (27.62s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-320923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-320923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (27.624060789s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.62s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-528248 -n embed-certs-528248
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-528248 -n embed-certs-528248: exit status 7 (98.501202ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-528248 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-831830 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-831830 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037574231s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-831830 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/embed-certs/serial/SecondStart (49.09s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-528248 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-528248 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (48.755349674s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-528248 -n embed-certs-528248
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-081742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-081742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030695926s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-081742 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/no-preload/serial/Stop (12.04s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-831830 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-831830 --alsologtostderr -v=3: (12.040298133s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.04s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-081742 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-081742 --alsologtostderr -v=3: (12.008556427s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-831830 -n no-preload-831830
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-831830 -n no-preload-831830: exit status 7 (82.327282ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-831830 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-081742 -n default-k8s-diff-port-081742
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-081742 -n default-k8s-diff-port-081742: exit status 7 (95.911252ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-081742 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/no-preload/serial/SecondStart (47.44s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-831830 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-831830 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.076170296s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-831830 -n no-preload-831830
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.44s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-081742 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1008 14:40:25.288933  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:25.295363  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:25.306806  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:25.328268  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:25.369712  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:25.451613  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:25.613767  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:25.935417  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:26.576839  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:26.847589  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:26.854044  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:26.865827  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:26.887382  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:26.929331  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:27.011430  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:27.173041  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:27.494695  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:27.858836  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:28.136955  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:29.418268  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-081742 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (52.790446782s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-081742 -n default-k8s-diff-port-081742
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.12s)
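
Note: the E1008 cert_rotation lines interleaved above almost certainly come from the long-lived test process (pid 516787), whose client-go certificate reloader is still watching client certs for the auto-774397 and kindnet-774397 profiles that were deleted earlier in the run; they are unrelated noise and this subtest passed.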

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.65s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-320923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1008 14:40:30.420432  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-320923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.650573018s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.65s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-320923 --alsologtostderr -v=3
E1008 14:40:31.979709  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-320923 --alsologtostderr -v=3: (1.300885562s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-320923 -n newest-cni-320923
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-320923 -n newest-cni-320923: exit status 7 (89.837103ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-320923 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (11.78s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-320923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1008 14:40:35.542370  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:40:37.101640  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-320923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (11.326439818s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-320923 -n newest-cni-320923
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.78s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
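
Note: DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop are deliberate no-ops for this profile: newest-cni starts with --network-plugin=cni and a custom pod CIDR but never installs a CNI, so workload pods cannot schedule (hence the repeated "cni mode requires additional setup" warnings). This is visible directly in node readiness (a sketch; the kubectl context name follows minikube's profile-name convention):

	kubectl --context newest-cni-320923 get nodes   # NotReady until some CNI is applied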

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-320923 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (3.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-320923 --alsologtostderr -v=1
E1008 14:40:45.784593  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-320923 -n newest-cni-320923
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-320923 -n newest-cni-320923: exit status 2 (380.116237ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-320923 -n newest-cni-320923
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-320923 -n newest-cni-320923: exit status 2 (385.909195ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-320923 --alsologtostderr -v=1
E1008 14:40:47.343663  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-320923 -n newest-cni-320923
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-320923 -n newest-cni-320923
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.34s)
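
The Pause sequence above is: pause the profile, expect "status" to exit with code 2 while reporting the apiserver Paused and the kubelet Stopped, then unpause and expect status to run cleanly. A sketch of driving that same flow from Go, using the binary path, profile, and flags shown in the log; the exit-code handling via exec.ExitError is the part the test relies on:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

const bin = "out/minikube-linux-amd64"

// status runs `minikube status` with a single Go-template field and
// returns the output plus the exit code; exit status 2 means the
// component is paused/stopped, which the test treats as acceptable.
func status(profile, field string) (string, int) {
	cmd := exec.Command(bin, "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	return string(out), code
}

func main() {
	profile := "newest-cni-320923"
	if err := exec.Command(bin, "pause", "-p", profile).Run(); err != nil {
		panic(err)
	}
	out, code := status(profile, "APIServer")
	fmt.Printf("apiserver=%s exit=%d\n", out, code) // expect Paused, exit 2
	out, code = status(profile, "Kubelet")
	fmt.Printf("kubelet=%s exit=%d\n", out, code) // expect Stopped, exit 2
	if err := exec.Command(bin, "unpause", "-p", profile).Run(); err != nil {
		panic(err)
	}
}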

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qppcj" [f13d6478-319c-436a-9d50-05d54836fdad] Running
E1008 14:41:01.658854  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/functional-686950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004357343s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
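
The helper behind this test polls the kubernetes-dashboard namespace until pods labeled k8s-app=kubernetes-dashboard are Running, with the 9m0s ceiling shown above. A rough client-go equivalent; the kubeconfig path and the 2s poll interval are assumptions, not values from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the CI job uses its own path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if running > 0 && running == len(pods.Items) {
				fmt.Println("healthy")
				return
			}
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	panic("timed out waiting for kubernetes-dashboard pods")
}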

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qppcj" [f13d6478-319c-436a-9d50-05d54836fdad] Running
E1008 14:41:06.266261  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/auto-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:41:07.826010  516787 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/kindnet-774397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004185657s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-528248 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-528248 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-528248 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-528248 -n embed-certs-528248
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-528248 -n embed-certs-528248: exit status 2 (322.776006ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-528248 -n embed-certs-528248
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-528248 -n embed-certs-528248: exit status 2 (327.123888ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-528248 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-528248 -n embed-certs-528248
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-528248 -n embed-certs-528248
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.83s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-58cjd" [4fddaa0a-7a8b-413e-8014-28f7e341abf2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004157924s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k5w24" [6fc2ca27-2012-40af-9398-0b1da6872712] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003406014s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-58cjd" [4fddaa0a-7a8b-413e-8014-28f7e341abf2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00326546s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-831830 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-831830 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.82s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-831830 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-831830 -n no-preload-831830
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-831830 -n no-preload-831830: exit status 2 (315.074366ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-831830 -n no-preload-831830
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-831830 -n no-preload-831830: exit status 2 (321.3709ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-831830 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-831830 -n no-preload-831830
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-831830 -n no-preload-831830
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.82s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k5w24" [6fc2ca27-2012-40af-9398-0b1da6872712] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004571097s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-081742 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-081742 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-081742 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-081742 -n default-k8s-diff-port-081742
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-081742 -n default-k8s-diff-port-081742: exit status 2 (308.71283ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-081742 -n default-k8s-diff-port-081742
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-081742 -n default-k8s-diff-port-081742: exit status 2 (313.220766ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-081742 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-081742 -n default-k8s-diff-port-081742
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-081742 -n default-k8s-diff-port-081742
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.76s)

Test skip (25/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
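
The DownloadOnly subtests skip themselves whenever a preloaded-images tarball already covers the requested Kubernetes version and runtime. A sketch of that kind of guard; preloadExists below is a hypothetical stand-in for the suite's real check, and the tarball naming is an assumption:

package download_test

import (
	"os"
	"path/filepath"
	"testing"
)

// preloadExists is a hypothetical helper: it only checks whether a
// preload tarball is already sitting in the local minikube cache.
// The file name pattern here is an assumption, not minikube's exact one.
func preloadExists(k8sVersion, runtime string) bool {
	home, err := os.UserHomeDir()
	if err != nil {
		return false
	}
	name := "preloaded-images-k8s-" + k8sVersion + "-" + runtime + ".tar.lz4"
	_, err = os.Stat(filepath.Join(home, ".minikube", "cache", "preloaded-tarball", name))
	return err == nil
}

func TestCachedImages(t *testing.T) {
	if preloadExists("v1.28.0", "containerd") {
		t.Skip("Preload exists, images won't be cached")
	}
	// ...otherwise, verify each expected image landed in the cache...
}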

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
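
Many of the skips in this group key off which container runtime the suite was started with. A sketch of that kind of guard; the flag plumbing below is a hypothetical reconstruction, not the suite's actual helper:

package docker_test

import (
	"flag"
	"testing"
)

// containerRuntime is a hypothetical stand-in for the suite's
// --container-runtime flag (this run was started with containerd).
var containerRuntime = flag.String("container-runtime", "docker", "runtime under test")

func TestDockerFlags(t *testing.T) {
	if *containerRuntime != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s",
			*containerRuntime)
	}
	// ...exercise docker-specific start flags...
}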

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.64s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-774397 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-774397

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-774397

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-774397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-774397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-774397

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-774397

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-774397

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-774397

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-774397

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-774397

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: /etc/hosts:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: /etc/resolv.conf:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-774397

>>> host: crictl pods:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: crictl containers:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> k8s: describe netcat deployment:
error: context "kubenet-774397" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-774397" does not exist

>>> k8s: netcat logs:
error: context "kubenet-774397" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-774397" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-774397" does not exist

>>> k8s: coredns logs:
error: context "kubenet-774397" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-774397" does not exist

>>> k8s: api server logs:
error: context "kubenet-774397" does not exist

>>> host: /etc/cni:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: ip a s:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: ip r s:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: iptables-save:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: iptables table nat:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-774397" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-774397" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-774397" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: kubelet daemon config:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> k8s: kubelet logs:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-774397

>>> host: docker daemon status:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: docker daemon config:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: docker system info:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: cri-docker daemon status:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: cri-docker daemon config:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: cri-dockerd version:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: containerd daemon status:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: containerd daemon config:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: containerd config dump:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: crio daemon status:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: crio daemon config:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: /etc/crio:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

>>> host: crio config:
* Profile "kubenet-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-774397"

----------------------- debugLogs end: kubenet-774397 [took: 4.122525599s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-774397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-774397
--- SKIP: TestNetworkPlugins/group/kubenet (4.64s)
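
Before skipping, the harness dumps the debugLogs battery shown above: it runs every probe against the never-started kubenet-774397 profile and prints whatever comes back, which is why each entry is a "context was not found" or "Profile not found" message. A sketch of that collect-everything loop; beyond the kubectl config dump visible above, the exact probe commands are assumptions:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "kubenet-774397"
	probes := []struct {
		name string
		args []string
	}{
		{">>> k8s: kubectl config:",
			[]string{"kubectl", "config", "view"}},
		{">>> host: /etc/cni:", // assumed minikube ssh invocation
			[]string{"out/minikube-linux-amd64", "-p", profile, "ssh", "sudo ls -la /etc/cni"}},
		// ...the real harness runs roughly fifty probes like these...
	}
	for _, p := range probes {
		fmt.Println(p.name)
		// CombinedOutput deliberately ignores the error: the failure
		// text itself is the diagnostic being collected.
		out, _ := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Println(string(out))
	}
}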

TestNetworkPlugins/group/cilium (4.21s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-774397 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-774397

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-774397

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-774397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-774397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-774397

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-774397

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-774397

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-774397

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-774397

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-774397

>>> host: /etc/nsswitch.conf:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

>>> host: /etc/hosts:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

>>> host: /etc/resolv.conf:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-774397

>>> host: crictl pods:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

>>> host: crictl containers:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

>>> k8s: describe netcat deployment:
error: context "cilium-774397" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-774397" does not exist

>>> k8s: netcat logs:
error: context "cilium-774397" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-774397" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-774397" does not exist

>>> k8s: coredns logs:
error: context "cilium-774397" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-774397" does not exist

>>> k8s: api server logs:
error: context "cilium-774397" does not exist

>>> host: /etc/cni:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

>>> host: ip a s:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

>>> host: ip r s:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

>>> host: iptables-save:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

>>> host: iptables table nat:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-774397

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-774397

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-774397" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-774397" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-774397

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-774397

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-774397" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-774397" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-774397" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-774397" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-774397" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-774397

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-774397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-774397"

                                                
                                                
----------------------- debugLogs end: cilium-774397 [took: 4.022100513s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-774397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-774397
--- SKIP: TestNetworkPlugins/group/cilium (4.21s)

x
+
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-832643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-832643
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)