Test Report: KVM_Linux_crio 21508

8932374f20a738e68cf28dc9e127463468f1eb30:2025-09-08:41334

Failed tests (6/324)

Order  Failed test  Duration (s)
37 TestAddons/parallel/Ingress 170.77
115 TestFunctional/parallel/ImageCommands/ImageListShort 2.27
118 TestFunctional/parallel/ImageCommands/ImageListYaml 2.32
119 TestFunctional/parallel/ImageCommands/ImageBuild 6.13
244 TestPreload 170.42
291 TestPause/serial/SecondStartNoReconfiguration 89.4
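The suite is a standard Go test binary, so individual failures can be re-run locally with go test filters. A minimal sketch, assuming a checked-out minikube tree with out/minikube-linux-amd64 already built; the exact harness flags this CI job used are not shown in this report:

	# hypothetical local re-run of the first failure; any additional harness flags are omitted
	go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m -v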
TestAddons/parallel/Ingress (170.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-674449 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-674449 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-674449 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [c90e9767-4367-4541-88a3-b800f3b971db] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [c90e9767-4367-4541-88a3-b800f3b971db] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 21.00752183s
I0908 13:39:53.652818 1120875 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-674449 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.497963447s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-674449 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.135
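Note on the failure above: "ssh: Process exited with status 28" is the remote command's exit status surfaced through minikube ssh, and curl reserves exit code 28 for CURLE_OPERATION_TIMEDOUT, i.e. the request to the in-VM ingress never completed. A minimal manual probe, assuming the addons-674449 profile is still running; --max-time and -v are diagnostic additions here, not part of the original test command:

	out/minikube-linux-amd64 -p addons-674449 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"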
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-674449 -n addons-674449
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-674449 logs -n 25: (1.702972135s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-419467                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-419467 │ jenkins │ v1.36.0 │ 08 Sep 25 13:35 UTC │ 08 Sep 25 13:35 UTC │
	│ start   │ --download-only -p binary-mirror-492864 --alsologtostderr --binary-mirror http://127.0.0.1:42959 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-492864 │ jenkins │ v1.36.0 │ 08 Sep 25 13:35 UTC │                     │
	│ delete  │ -p binary-mirror-492864                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-492864 │ jenkins │ v1.36.0 │ 08 Sep 25 13:35 UTC │ 08 Sep 25 13:35 UTC │
	│ addons  │ disable dashboard -p addons-674449                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:35 UTC │                     │
	│ addons  │ enable dashboard -p addons-674449                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:35 UTC │                     │
	│ start   │ -p addons-674449 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:35 UTC │ 08 Sep 25 13:38 UTC │
	│ addons  │ addons-674449 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ addons  │ addons-674449 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ addons  │ addons-674449 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ addons  │ addons-674449 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ ssh     │ addons-674449 ssh cat /opt/local-path-provisioner/pvc-b826113c-f42b-42b7-85e8-1488c168911b_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ addons  │ addons-674449 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ addons  │ enable headlamp -p addons-674449 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ addons  │ addons-674449 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ ip      │ addons-674449 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ addons  │ addons-674449 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ addons  │ addons-674449 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ addons  │ addons-674449 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-674449                                                                                                                                                                                                                                                                                                                                                                                         │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ addons  │ addons-674449 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ addons  │ addons-674449 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ ssh     │ addons-674449 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │                     │
	│ addons  │ addons-674449 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ addons  │ addons-674449 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ ip      │ addons-674449 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-674449        │ jenkins │ v1.36.0 │ 08 Sep 25 13:42 UTC │ 08 Sep 25 13:42 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:35:24
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:35:24.033712 1121483 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:35:24.034020 1121483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:35:24.034032 1121483 out.go:374] Setting ErrFile to fd 2...
	I0908 13:35:24.034038 1121483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:35:24.034239 1121483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 13:35:24.034902 1121483 out.go:368] Setting JSON to false
	I0908 13:35:24.035908 1121483 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15468,"bootTime":1757323056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:35:24.036029 1121483 start.go:140] virtualization: kvm guest
	I0908 13:35:24.038221 1121483 out.go:179] * [addons-674449] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 13:35:24.039911 1121483 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:35:24.039971 1121483 notify.go:220] Checking for updates...
	I0908 13:35:24.042827 1121483 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:35:24.044219 1121483 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 13:35:24.045371 1121483 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 13:35:24.046503 1121483 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 13:35:24.047736 1121483 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:35:24.049117 1121483 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:35:24.084004 1121483 out.go:179] * Using the kvm2 driver based on user configuration
	I0908 13:35:24.085145 1121483 start.go:304] selected driver: kvm2
	I0908 13:35:24.085164 1121483 start.go:918] validating driver "kvm2" against <nil>
	I0908 13:35:24.085181 1121483 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:35:24.086290 1121483 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:35:24.086389 1121483 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21508-1116714/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 13:35:24.103131 1121483 install.go:137] /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 13:35:24.103187 1121483 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:35:24.103478 1121483 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:35:24.103520 1121483 cni.go:84] Creating CNI manager for ""
	I0908 13:35:24.103565 1121483 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 13:35:24.103577 1121483 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 13:35:24.103635 1121483 start.go:348] cluster config:
	{Name:addons-674449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-674449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:35:24.103772 1121483 iso.go:125] acquiring lock: {Name:mk347390bf24761f2c39bf4cd5b718f157a50faa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:35:24.105776 1121483 out.go:179] * Starting "addons-674449" primary control-plane node in "addons-674449" cluster
	I0908 13:35:24.107021 1121483 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:35:24.107139 1121483 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 13:35:24.107154 1121483 cache.go:58] Caching tarball of preloaded images
	I0908 13:35:24.107253 1121483 preload.go:172] Found /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 13:35:24.107264 1121483 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 13:35:24.107614 1121483 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/config.json ...
	I0908 13:35:24.107645 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/config.json: {Name:mk99913511e1c8c5deee5e1f2a9598e815355d6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:24.107872 1121483 start.go:360] acquireMachinesLock for addons-674449: {Name:mk0626ae9b324aeb819357e3de70b05b9e4c30a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 13:35:24.107924 1121483 start.go:364] duration metric: took 36.15µs to acquireMachinesLock for "addons-674449"
	I0908 13:35:24.107942 1121483 start.go:93] Provisioning new machine with config: &{Name:addons-674449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-674449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 13:35:24.108005 1121483 start.go:125] createHost starting for "" (driver="kvm2")
	I0908 13:35:24.110447 1121483 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0908 13:35:24.110637 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:35:24.110690 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:35:24.126704 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0908 13:35:24.127225 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:35:24.127823 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:35:24.127847 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:35:24.128311 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:35:24.128533 1121483 main.go:141] libmachine: (addons-674449) Calling .GetMachineName
	I0908 13:35:24.128688 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:35:24.128831 1121483 start.go:159] libmachine.API.Create for "addons-674449" (driver="kvm2")
	I0908 13:35:24.128866 1121483 client.go:168] LocalClient.Create starting
	I0908 13:35:24.128916 1121483 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem
	I0908 13:35:24.165436 1121483 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem
	I0908 13:35:24.339893 1121483 main.go:141] libmachine: Running pre-create checks...
	I0908 13:35:24.339924 1121483 main.go:141] libmachine: (addons-674449) Calling .PreCreateCheck
	I0908 13:35:24.340531 1121483 main.go:141] libmachine: (addons-674449) Calling .GetConfigRaw
	I0908 13:35:24.341000 1121483 main.go:141] libmachine: Creating machine...
	I0908 13:35:24.341015 1121483 main.go:141] libmachine: (addons-674449) Calling .Create
	I0908 13:35:24.341189 1121483 main.go:141] libmachine: (addons-674449) creating KVM machine...
	I0908 13:35:24.341214 1121483 main.go:141] libmachine: (addons-674449) creating network...
	I0908 13:35:24.342812 1121483 main.go:141] libmachine: (addons-674449) DBG | found existing default KVM network
	I0908 13:35:24.343663 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:24.343472 1121505 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123560}
	I0908 13:35:24.343726 1121483 main.go:141] libmachine: (addons-674449) DBG | created network xml: 
	I0908 13:35:24.343759 1121483 main.go:141] libmachine: (addons-674449) DBG | <network>
	I0908 13:35:24.343767 1121483 main.go:141] libmachine: (addons-674449) DBG |   <name>mk-addons-674449</name>
	I0908 13:35:24.343777 1121483 main.go:141] libmachine: (addons-674449) DBG |   <dns enable='no'/>
	I0908 13:35:24.343786 1121483 main.go:141] libmachine: (addons-674449) DBG |   
	I0908 13:35:24.343794 1121483 main.go:141] libmachine: (addons-674449) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0908 13:35:24.343803 1121483 main.go:141] libmachine: (addons-674449) DBG |     <dhcp>
	I0908 13:35:24.343821 1121483 main.go:141] libmachine: (addons-674449) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0908 13:35:24.343862 1121483 main.go:141] libmachine: (addons-674449) DBG |     </dhcp>
	I0908 13:35:24.343886 1121483 main.go:141] libmachine: (addons-674449) DBG |   </ip>
	I0908 13:35:24.343897 1121483 main.go:141] libmachine: (addons-674449) DBG |   
	I0908 13:35:24.343908 1121483 main.go:141] libmachine: (addons-674449) DBG | </network>
	I0908 13:35:24.343938 1121483 main.go:141] libmachine: (addons-674449) DBG | 
	I0908 13:35:24.349998 1121483 main.go:141] libmachine: (addons-674449) DBG | trying to create private KVM network mk-addons-674449 192.168.39.0/24...
	I0908 13:35:24.426519 1121483 main.go:141] libmachine: (addons-674449) DBG | private KVM network mk-addons-674449 192.168.39.0/24 created
	I0908 13:35:24.426567 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:24.426468 1121505 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 13:35:24.426592 1121483 main.go:141] libmachine: (addons-674449) setting up store path in /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449 ...
	I0908 13:35:24.426612 1121483 main.go:141] libmachine: (addons-674449) building disk image from file:///home/jenkins/minikube-integration/21508-1116714/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 13:35:24.426632 1121483 main.go:141] libmachine: (addons-674449) Downloading /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21508-1116714/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 13:35:24.753162 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:24.752940 1121505 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa...
	I0908 13:35:24.806674 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:24.806465 1121505 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/addons-674449.rawdisk...
	I0908 13:35:24.806712 1121483 main.go:141] libmachine: (addons-674449) DBG | Writing magic tar header
	I0908 13:35:24.806727 1121483 main.go:141] libmachine: (addons-674449) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449 (perms=drwx------)
	I0908 13:35:24.806742 1121483 main.go:141] libmachine: (addons-674449) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines (perms=drwxr-xr-x)
	I0908 13:35:24.806749 1121483 main.go:141] libmachine: (addons-674449) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube (perms=drwxr-xr-x)
	I0908 13:35:24.806760 1121483 main.go:141] libmachine: (addons-674449) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714 (perms=drwxrwxr-x)
	I0908 13:35:24.806766 1121483 main.go:141] libmachine: (addons-674449) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0908 13:35:24.806774 1121483 main.go:141] libmachine: (addons-674449) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0908 13:35:24.806778 1121483 main.go:141] libmachine: (addons-674449) creating domain...
	I0908 13:35:24.806816 1121483 main.go:141] libmachine: (addons-674449) DBG | Writing SSH key tar header
	I0908 13:35:24.806845 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:24.806585 1121505 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449 ...
	I0908 13:35:24.806880 1121483 main.go:141] libmachine: (addons-674449) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449
	I0908 13:35:24.806898 1121483 main.go:141] libmachine: (addons-674449) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines
	I0908 13:35:24.806911 1121483 main.go:141] libmachine: (addons-674449) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 13:35:24.806924 1121483 main.go:141] libmachine: (addons-674449) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714
	I0908 13:35:24.806938 1121483 main.go:141] libmachine: (addons-674449) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0908 13:35:24.806950 1121483 main.go:141] libmachine: (addons-674449) DBG | checking permissions on dir: /home/jenkins
	I0908 13:35:24.806963 1121483 main.go:141] libmachine: (addons-674449) DBG | checking permissions on dir: /home
	I0908 13:35:24.806973 1121483 main.go:141] libmachine: (addons-674449) DBG | skipping /home - not owner
	I0908 13:35:24.808113 1121483 main.go:141] libmachine: (addons-674449) define libvirt domain using xml: 
	I0908 13:35:24.808135 1121483 main.go:141] libmachine: (addons-674449) <domain type='kvm'>
	I0908 13:35:24.808144 1121483 main.go:141] libmachine: (addons-674449)   <name>addons-674449</name>
	I0908 13:35:24.808151 1121483 main.go:141] libmachine: (addons-674449)   <memory unit='MiB'>4096</memory>
	I0908 13:35:24.808158 1121483 main.go:141] libmachine: (addons-674449)   <vcpu>2</vcpu>
	I0908 13:35:24.808164 1121483 main.go:141] libmachine: (addons-674449)   <features>
	I0908 13:35:24.808172 1121483 main.go:141] libmachine: (addons-674449)     <acpi/>
	I0908 13:35:24.808181 1121483 main.go:141] libmachine: (addons-674449)     <apic/>
	I0908 13:35:24.808190 1121483 main.go:141] libmachine: (addons-674449)     <pae/>
	I0908 13:35:24.808213 1121483 main.go:141] libmachine: (addons-674449)     
	I0908 13:35:24.808257 1121483 main.go:141] libmachine: (addons-674449)   </features>
	I0908 13:35:24.808281 1121483 main.go:141] libmachine: (addons-674449)   <cpu mode='host-passthrough'>
	I0908 13:35:24.808322 1121483 main.go:141] libmachine: (addons-674449)   
	I0908 13:35:24.808341 1121483 main.go:141] libmachine: (addons-674449)   </cpu>
	I0908 13:35:24.808386 1121483 main.go:141] libmachine: (addons-674449)   <os>
	I0908 13:35:24.808424 1121483 main.go:141] libmachine: (addons-674449)     <type>hvm</type>
	I0908 13:35:24.808437 1121483 main.go:141] libmachine: (addons-674449)     <boot dev='cdrom'/>
	I0908 13:35:24.808449 1121483 main.go:141] libmachine: (addons-674449)     <boot dev='hd'/>
	I0908 13:35:24.808474 1121483 main.go:141] libmachine: (addons-674449)     <bootmenu enable='no'/>
	I0908 13:35:24.808489 1121483 main.go:141] libmachine: (addons-674449)   </os>
	I0908 13:35:24.808513 1121483 main.go:141] libmachine: (addons-674449)   <devices>
	I0908 13:35:24.808533 1121483 main.go:141] libmachine: (addons-674449)     <disk type='file' device='cdrom'>
	I0908 13:35:24.808552 1121483 main.go:141] libmachine: (addons-674449)       <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/boot2docker.iso'/>
	I0908 13:35:24.808562 1121483 main.go:141] libmachine: (addons-674449)       <target dev='hdc' bus='scsi'/>
	I0908 13:35:24.808574 1121483 main.go:141] libmachine: (addons-674449)       <readonly/>
	I0908 13:35:24.808583 1121483 main.go:141] libmachine: (addons-674449)     </disk>
	I0908 13:35:24.808593 1121483 main.go:141] libmachine: (addons-674449)     <disk type='file' device='disk'>
	I0908 13:35:24.808603 1121483 main.go:141] libmachine: (addons-674449)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0908 13:35:24.808611 1121483 main.go:141] libmachine: (addons-674449)       <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/addons-674449.rawdisk'/>
	I0908 13:35:24.808624 1121483 main.go:141] libmachine: (addons-674449)       <target dev='hda' bus='virtio'/>
	I0908 13:35:24.808634 1121483 main.go:141] libmachine: (addons-674449)     </disk>
	I0908 13:35:24.808694 1121483 main.go:141] libmachine: (addons-674449)     <interface type='network'>
	I0908 13:35:24.808711 1121483 main.go:141] libmachine: (addons-674449)       <source network='mk-addons-674449'/>
	I0908 13:35:24.808728 1121483 main.go:141] libmachine: (addons-674449)       <model type='virtio'/>
	I0908 13:35:24.808745 1121483 main.go:141] libmachine: (addons-674449)     </interface>
	I0908 13:35:24.808758 1121483 main.go:141] libmachine: (addons-674449)     <interface type='network'>
	I0908 13:35:24.808769 1121483 main.go:141] libmachine: (addons-674449)       <source network='default'/>
	I0908 13:35:24.808778 1121483 main.go:141] libmachine: (addons-674449)       <model type='virtio'/>
	I0908 13:35:24.808787 1121483 main.go:141] libmachine: (addons-674449)     </interface>
	I0908 13:35:24.808795 1121483 main.go:141] libmachine: (addons-674449)     <serial type='pty'>
	I0908 13:35:24.808806 1121483 main.go:141] libmachine: (addons-674449)       <target port='0'/>
	I0908 13:35:24.808824 1121483 main.go:141] libmachine: (addons-674449)     </serial>
	I0908 13:35:24.808840 1121483 main.go:141] libmachine: (addons-674449)     <console type='pty'>
	I0908 13:35:24.808857 1121483 main.go:141] libmachine: (addons-674449)       <target type='serial' port='0'/>
	I0908 13:35:24.808873 1121483 main.go:141] libmachine: (addons-674449)     </console>
	I0908 13:35:24.808885 1121483 main.go:141] libmachine: (addons-674449)     <rng model='virtio'>
	I0908 13:35:24.808897 1121483 main.go:141] libmachine: (addons-674449)       <backend model='random'>/dev/random</backend>
	I0908 13:35:24.808906 1121483 main.go:141] libmachine: (addons-674449)     </rng>
	I0908 13:35:24.808912 1121483 main.go:141] libmachine: (addons-674449)     
	I0908 13:35:24.808923 1121483 main.go:141] libmachine: (addons-674449)     
	I0908 13:35:24.808933 1121483 main.go:141] libmachine: (addons-674449)   </devices>
	I0908 13:35:24.808947 1121483 main.go:141] libmachine: (addons-674449) </domain>
	I0908 13:35:24.808963 1121483 main.go:141] libmachine: (addons-674449) 
	I0908 13:35:24.813403 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:e6:9e:0c in network default
	I0908 13:35:24.814287 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:24.814313 1121483 main.go:141] libmachine: (addons-674449) starting domain...
	I0908 13:35:24.814327 1121483 main.go:141] libmachine: (addons-674449) ensuring networks are active...
	I0908 13:35:24.815401 1121483 main.go:141] libmachine: (addons-674449) Ensuring network default is active
	I0908 13:35:24.815788 1121483 main.go:141] libmachine: (addons-674449) Ensuring network mk-addons-674449 is active
	I0908 13:35:24.816410 1121483 main.go:141] libmachine: (addons-674449) getting domain XML...
	I0908 13:35:24.817220 1121483 main.go:141] libmachine: (addons-674449) creating domain...
	I0908 13:35:25.198012 1121483 main.go:141] libmachine: (addons-674449) waiting for IP...
	I0908 13:35:25.198876 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:25.199295 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:25.199370 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:25.199294 1121505 retry.go:31] will retry after 244.804172ms: waiting for domain to come up
	I0908 13:35:25.446026 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:25.446688 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:25.446719 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:25.446635 1121505 retry.go:31] will retry after 261.30065ms: waiting for domain to come up
	I0908 13:35:25.709245 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:25.709722 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:25.709774 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:25.709697 1121505 retry.go:31] will retry after 358.844866ms: waiting for domain to come up
	I0908 13:35:26.070473 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:26.071030 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:26.071104 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:26.070976 1121505 retry.go:31] will retry after 396.002476ms: waiting for domain to come up
	I0908 13:35:26.468768 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:26.469262 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:26.469301 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:26.469208 1121505 retry.go:31] will retry after 533.329974ms: waiting for domain to come up
	I0908 13:35:27.003968 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:27.004422 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:27.004510 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:27.004410 1121505 retry.go:31] will retry after 746.298555ms: waiting for domain to come up
	I0908 13:35:27.752026 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:27.752437 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:27.752461 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:27.752409 1121505 retry.go:31] will retry after 1.18905366s: waiting for domain to come up
	I0908 13:35:28.943824 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:28.944278 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:28.944326 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:28.944246 1121505 retry.go:31] will retry after 1.127343545s: waiting for domain to come up
	I0908 13:35:30.073468 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:30.073975 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:30.074010 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:30.073931 1121505 retry.go:31] will retry after 1.181546314s: waiting for domain to come up
	I0908 13:35:31.257401 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:31.257726 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:31.257765 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:31.257694 1121505 retry.go:31] will retry after 2.141919311s: waiting for domain to come up
	I0908 13:35:33.402371 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:33.402960 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:33.402983 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:33.402921 1121505 retry.go:31] will retry after 2.765457639s: waiting for domain to come up
	I0908 13:35:36.169717 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:36.170206 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:36.170243 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:36.170131 1121505 retry.go:31] will retry after 3.223901692s: waiting for domain to come up
	I0908 13:35:39.395671 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:39.396039 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:39.396124 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:39.396045 1121505 retry.go:31] will retry after 3.770962168s: waiting for domain to come up
	I0908 13:35:43.168425 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:43.168914 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find current IP address of domain addons-674449 in network mk-addons-674449
	I0908 13:35:43.168946 1121483 main.go:141] libmachine: (addons-674449) DBG | I0908 13:35:43.168884 1121505 retry.go:31] will retry after 4.687519155s: waiting for domain to come up
	I0908 13:35:47.858630 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:47.859219 1121483 main.go:141] libmachine: (addons-674449) found domain IP: 192.168.39.135
	I0908 13:35:47.859242 1121483 main.go:141] libmachine: (addons-674449) reserving static IP address...
	I0908 13:35:47.859251 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has current primary IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:47.859733 1121483 main.go:141] libmachine: (addons-674449) DBG | unable to find host DHCP lease matching {name: "addons-674449", mac: "52:54:00:7c:26:15", ip: "192.168.39.135"} in network mk-addons-674449
	I0908 13:35:47.951625 1121483 main.go:141] libmachine: (addons-674449) reserved static IP address 192.168.39.135 for domain addons-674449
	I0908 13:35:47.951669 1121483 main.go:141] libmachine: (addons-674449) waiting for SSH...
	I0908 13:35:47.951714 1121483 main.go:141] libmachine: (addons-674449) DBG | Getting to WaitForSSH function...
	I0908 13:35:47.954704 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:47.955115 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:47.955148 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:47.955384 1121483 main.go:141] libmachine: (addons-674449) DBG | Using SSH client type: external
	I0908 13:35:47.955441 1121483 main.go:141] libmachine: (addons-674449) DBG | Using SSH private key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa (-rw-------)
	I0908 13:35:47.955484 1121483 main.go:141] libmachine: (addons-674449) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 13:35:47.955505 1121483 main.go:141] libmachine: (addons-674449) DBG | About to run SSH command:
	I0908 13:35:47.955515 1121483 main.go:141] libmachine: (addons-674449) DBG | exit 0
	I0908 13:35:48.080519 1121483 main.go:141] libmachine: (addons-674449) DBG | SSH cmd err, output: <nil>: 
	I0908 13:35:48.080870 1121483 main.go:141] libmachine: (addons-674449) KVM machine creation complete
	I0908 13:35:48.081155 1121483 main.go:141] libmachine: (addons-674449) Calling .GetConfigRaw
	I0908 13:35:48.081802 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:35:48.082040 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:35:48.082244 1121483 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0908 13:35:48.082267 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:35:48.083810 1121483 main.go:141] libmachine: Detecting operating system of created instance...
	I0908 13:35:48.083834 1121483 main.go:141] libmachine: Waiting for SSH to be available...
	I0908 13:35:48.083842 1121483 main.go:141] libmachine: Getting to WaitForSSH function...
	I0908 13:35:48.083851 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:35:48.086326 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.086692 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:48.086720 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.086867 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:35:48.087069 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:48.087244 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:48.087382 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:35:48.087620 1121483 main.go:141] libmachine: Using SSH client type: native
	I0908 13:35:48.087931 1121483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 13:35:48.087945 1121483 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0908 13:35:48.195540 1121483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
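
The `exit 0` probes above are libmachine's SSH-readiness check: it keeps redialing the guest and running a no-op command until one attempt succeeds. A minimal sketch of that retry loop, under the same user/IP/host-key options shown in the log (key path is hypothetical; uses golang.org/x/crypto/ssh), not minikube's actual code:

    // Sketch of the wait-for-SSH probe: redial and run a no-op command
    // until sshd answers.
    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func runExitZero(client *ssh.Client) error {
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0") // success means the guest is ready
    }

    func waitForSSH(addr string, cfg *ssh.ClientConfig, attempts int) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                err = runExitZero(client)
                client.Close()
                if err == nil {
                    return nil
                }
            }
            lastErr = err
            time.Sleep(2 * time.Second)
        }
        return lastErr
    }

    func main() {
        key, err := os.ReadFile("/path/to/id_rsa") // hypothetical key path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        if err := waitForSSH("192.168.39.135:22", cfg, 60); err != nil {
            log.Fatal(err)
        }
    }
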
	I0908 13:35:48.195568 1121483 main.go:141] libmachine: Detecting the provisioner...
	I0908 13:35:48.195580 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:35:48.198536 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.198903 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:48.198939 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.199076 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:35:48.199256 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:48.199392 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:48.199493 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:35:48.199734 1121483 main.go:141] libmachine: Using SSH client type: native
	I0908 13:35:48.200033 1121483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 13:35:48.200052 1121483 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0908 13:35:48.305511 1121483 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0908 13:35:48.305596 1121483 main.go:141] libmachine: found compatible host: buildroot
	I0908 13:35:48.305605 1121483 main.go:141] libmachine: Provisioning with buildroot...
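
Provisioner detection above works by running `cat /etc/os-release` and matching the ID field ("buildroot" here) against the provisioners libmachine knows about. A rough local sketch of the key=value parsing, again illustrative rather than minikube's actual code:

    // Sketch: parse os-release style key=value lines, stripping optional
    // quotes (e.g. PRETTY_NAME="Buildroot 2025.02" as in the log above).
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func parseOSRelease(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        fields := make(map[string]string)
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            fields[k] = strings.Trim(v, `"`)
        }
        return fields, sc.Err()
    }

    func main() {
        info, err := parseOSRelease("/etc/os-release")
        if err != nil {
            log.Fatal(err)
        }
        // ID=buildroot is what selects the buildroot provisioner above.
        fmt.Println(info["ID"], info["VERSION_ID"])
    }
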
	I0908 13:35:48.305614 1121483 main.go:141] libmachine: (addons-674449) Calling .GetMachineName
	I0908 13:35:48.305900 1121483 buildroot.go:166] provisioning hostname "addons-674449"
	I0908 13:35:48.305935 1121483 main.go:141] libmachine: (addons-674449) Calling .GetMachineName
	I0908 13:35:48.306160 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:35:48.308852 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.309209 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:48.309234 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.309369 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:35:48.309581 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:48.309730 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:48.309859 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:35:48.310039 1121483 main.go:141] libmachine: Using SSH client type: native
	I0908 13:35:48.310314 1121483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 13:35:48.310328 1121483 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-674449 && echo "addons-674449" | sudo tee /etc/hostname
	I0908 13:35:48.435306 1121483 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-674449
	
	I0908 13:35:48.435350 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:35:48.438180 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.438529 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:48.438560 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.438757 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:35:48.438963 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:48.439138 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:48.439335 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:35:48.439532 1121483 main.go:141] libmachine: Using SSH client type: native
	I0908 13:35:48.439848 1121483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 13:35:48.439872 1121483 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-674449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-674449/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-674449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:35:48.556536 1121483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:35:48.556589 1121483 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21508-1116714/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-1116714/.minikube}
	I0908 13:35:48.556609 1121483 buildroot.go:174] setting up certificates
	I0908 13:35:48.556633 1121483 provision.go:84] configureAuth start
	I0908 13:35:48.556646 1121483 main.go:141] libmachine: (addons-674449) Calling .GetMachineName
	I0908 13:35:48.557022 1121483 main.go:141] libmachine: (addons-674449) Calling .GetIP
	I0908 13:35:48.560027 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.560476 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:48.560503 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.560664 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:35:48.563395 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.563810 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:48.563841 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.564046 1121483 provision.go:143] copyHostCerts
	I0908 13:35:48.564144 1121483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem (1123 bytes)
	I0908 13:35:48.564264 1121483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem (1675 bytes)
	I0908 13:35:48.564323 1121483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem (1082 bytes)
	I0908 13:35:48.564372 1121483 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem org=jenkins.addons-674449 san=[127.0.0.1 192.168.39.135 addons-674449 localhost minikube]
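
The server cert logged above is issued by the minikube CA with SANs covering every name the apiserver might be dialed by: 127.0.0.1, 192.168.39.135, addons-674449, localhost, and minikube. A minimal, self-signed approximation of building such a SAN certificate with Go's crypto/x509 (illustrative only; minikube signs with ca.pem/ca-key.pem rather than self-signing):

    // Sketch: a certificate carrying the SAN list from the log line above.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-674449"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log: every name/IP the apiserver may be reached by.
            DNSNames:    []string{"addons-674449", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.135")},
        }
        // Self-signed here for brevity; the real code signs with the CA key.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
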
	I0908 13:35:48.821191 1121483 provision.go:177] copyRemoteCerts
	I0908 13:35:48.821265 1121483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:35:48.821297 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:35:48.823904 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.824307 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:48.824341 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:48.824529 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:35:48.824794 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:48.824976 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:35:48.825143 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:35:48.912945 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 13:35:48.943773 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 13:35:48.976228 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 13:35:49.011360 1121483 provision.go:87] duration metric: took 454.70911ms to configureAuth
	I0908 13:35:49.011403 1121483 buildroot.go:189] setting minikube options for container-runtime
	I0908 13:35:49.011611 1121483 config.go:182] Loaded profile config "addons-674449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:35:49.011763 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:35:49.015353 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.015885 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:49.015925 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.016215 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:35:49.016507 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:49.016725 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:49.016958 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:35:49.017254 1121483 main.go:141] libmachine: Using SSH client type: native
	I0908 13:35:49.017486 1121483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 13:35:49.017501 1121483 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 13:35:49.270583 1121483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 13:35:49.270631 1121483 main.go:141] libmachine: Checking connection to Docker...
	I0908 13:35:49.270646 1121483 main.go:141] libmachine: (addons-674449) Calling .GetURL
	I0908 13:35:49.272168 1121483 main.go:141] libmachine: (addons-674449) DBG | using libvirt version 6000000
	I0908 13:35:49.274921 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.275300 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:49.275336 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.275536 1121483 main.go:141] libmachine: Docker is up and running!
	I0908 13:35:49.275554 1121483 main.go:141] libmachine: Reticulating splines...
	I0908 13:35:49.275568 1121483 client.go:171] duration metric: took 25.14668615s to LocalClient.Create
	I0908 13:35:49.275594 1121483 start.go:167] duration metric: took 25.146775455s to libmachine.API.Create "addons-674449"
	I0908 13:35:49.275605 1121483 start.go:293] postStartSetup for "addons-674449" (driver="kvm2")
	I0908 13:35:49.275615 1121483 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:35:49.275636 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:35:49.275965 1121483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:35:49.276000 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:35:49.278312 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.278613 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:49.278640 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.278818 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:35:49.279112 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:49.279375 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:35:49.279588 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:35:49.365800 1121483 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:35:49.371811 1121483 info.go:137] Remote host: Buildroot 2025.02
	I0908 13:35:49.371850 1121483 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/addons for local assets ...
	I0908 13:35:49.371928 1121483 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/files for local assets ...
	I0908 13:35:49.371949 1121483 start.go:296] duration metric: took 96.339384ms for postStartSetup
	I0908 13:35:49.371999 1121483 main.go:141] libmachine: (addons-674449) Calling .GetConfigRaw
	I0908 13:35:49.372714 1121483 main.go:141] libmachine: (addons-674449) Calling .GetIP
	I0908 13:35:49.375882 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.376371 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:49.376413 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.376712 1121483 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/config.json ...
	I0908 13:35:49.376933 1121483 start.go:128] duration metric: took 25.26891419s to createHost
	I0908 13:35:49.376987 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:35:49.379903 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.380291 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:49.380323 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.380499 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:35:49.380722 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:49.380905 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:49.381049 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:35:49.381289 1121483 main.go:141] libmachine: Using SSH client type: native
	I0908 13:35:49.381511 1121483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 13:35:49.381533 1121483 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 13:35:49.489521 1121483 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757338549.464650079
	
	I0908 13:35:49.489547 1121483 fix.go:216] guest clock: 1757338549.464650079
	I0908 13:35:49.489559 1121483 fix.go:229] Guest: 2025-09-08 13:35:49.464650079 +0000 UTC Remote: 2025-09-08 13:35:49.376963452 +0000 UTC m=+25.384506408 (delta=87.686627ms)
	I0908 13:35:49.489617 1121483 fix.go:200] guest clock delta is within tolerance: 87.686627ms
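
(The delta above is simply guest minus host at the comparison instant: 1757338549.464650079 s − 1757338549.376963452 s = 0.087686627 s ≈ 87.7 ms, small enough that no guest clock adjustment is needed.)
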
	I0908 13:35:49.489629 1121483 start.go:83] releasing machines lock for "addons-674449", held for 25.381695444s
	I0908 13:35:49.489678 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:35:49.489999 1121483 main.go:141] libmachine: (addons-674449) Calling .GetIP
	I0908 13:35:49.492743 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.493152 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:49.493183 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.493348 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:35:49.493862 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:35:49.494040 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:35:49.494175 1121483 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:35:49.494226 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:35:49.494278 1121483 ssh_runner.go:195] Run: cat /version.json
	I0908 13:35:49.494304 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:35:49.497096 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.497289 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.497439 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:49.497463 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.497624 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:49.497648 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:49.497651 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:35:49.497854 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:35:49.497866 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:49.498078 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:35:49.498098 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:35:49.498261 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:35:49.498287 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:35:49.498403 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:35:49.577909 1121483 ssh_runner.go:195] Run: systemctl --version
	I0908 13:35:49.602839 1121483 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 13:35:49.766562 1121483 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 13:35:49.774320 1121483 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 13:35:49.774424 1121483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:35:49.797803 1121483 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 13:35:49.797836 1121483 start.go:495] detecting cgroup driver to use...
	I0908 13:35:49.797932 1121483 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 13:35:49.824026 1121483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 13:35:49.844421 1121483 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:35:49.844490 1121483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:35:49.863693 1121483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:35:49.883383 1121483 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:35:50.040895 1121483 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:35:50.196583 1121483 docker.go:234] disabling docker service ...
	I0908 13:35:50.196677 1121483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:35:50.214617 1121483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:35:50.231447 1121483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:35:50.450006 1121483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:35:50.597673 1121483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:35:50.613915 1121483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:35:50.638057 1121483 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 13:35:50.638128 1121483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:35:50.651389 1121483 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 13:35:50.651491 1121483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:35:50.664715 1121483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:35:50.677475 1121483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:35:50.690444 1121483 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:35:50.704257 1121483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:35:50.717645 1121483 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:35:50.740740 1121483 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
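
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with lines equivalent to the following (reconstructed from the commands, not captured from the VM). The ip_unprivileged_port_start=0 sysctl is what later lets pods such as the ingress controller bind ports 80/443 without extra privileges:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
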
	I0908 13:35:50.754099 1121483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:35:50.765658 1121483 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 13:35:50.765734 1121483 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 13:35:50.788816 1121483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
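
The failed sysctl read above is the expected first-boot case: /proc/sys/net/bridge/ only exists once the br_netfilter kernel module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. A compact sketch of that check-then-load pattern (must run as root; paths as in the log):

    // Sketch of the fallback above: if the bridge-nf sysctl is missing,
    // load br_netfilter, then enable IPv4 forwarding.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // /proc/sys/net/bridge/ only appears once br_netfilter is loaded.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        // Equivalent of the logged `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }
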
	I0908 13:35:50.802606 1121483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:35:50.953490 1121483 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 13:35:51.388148 1121483 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 13:35:51.388288 1121483 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 13:35:51.394455 1121483 start.go:563] Will wait 60s for crictl version
	I0908 13:35:51.394557 1121483 ssh_runner.go:195] Run: which crictl
	I0908 13:35:51.399451 1121483 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:35:51.444122 1121483 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 13:35:51.444237 1121483 ssh_runner.go:195] Run: crio --version
	I0908 13:35:51.475976 1121483 ssh_runner.go:195] Run: crio --version
	I0908 13:35:51.583405 1121483 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 13:35:51.646050 1121483 main.go:141] libmachine: (addons-674449) Calling .GetIP
	I0908 13:35:51.649256 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:51.649838 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:35:51.649875 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:35:51.650184 1121483 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0908 13:35:51.655367 1121483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:35:51.672497 1121483 kubeadm.go:875] updating cluster {Name:addons-674449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-674449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:35:51.672645 1121483 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:35:51.672711 1121483 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:35:51.713136 1121483 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0908 13:35:51.713715 1121483 ssh_runner.go:195] Run: which lz4
	I0908 13:35:51.719714 1121483 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 13:35:51.724907 1121483 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 13:35:51.724955 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0908 13:35:53.371225 1121483 crio.go:462] duration metric: took 1.651565173s to copy over tarball
	I0908 13:35:53.371328 1121483 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 13:35:55.082706 1121483 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.711339797s)
	I0908 13:35:55.082751 1121483 crio.go:469] duration metric: took 1.71149014s to extract the tarball
	I0908 13:35:55.082764 1121483 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 13:35:55.125735 1121483 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:35:55.174281 1121483 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 13:35:55.174323 1121483 cache_images.go:85] Images are preloaded, skipping loading
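
This is minikube's preload fast path: rather than pulling each image over the network, a ~409 MB lz4-compressed tarball of a pre-populated CRI-O image store is scp'd into the VM and unpacked over /var. The --xattrs --xattrs-include security.capability flags on the tar invocation matter because file capabilities live in that extended attribute and would otherwise be lost in extraction; the second `sudo crictl images` run then confirms the store is populated, so per-image loading can be skipped.
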
	I0908 13:35:55.174334 1121483 kubeadm.go:926] updating node { 192.168.39.135 8443 v1.34.0 crio true true} ...
	I0908 13:35:55.174468 1121483 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-674449 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-674449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 13:35:55.174555 1121483 ssh_runner.go:195] Run: crio config
	I0908 13:35:55.232556 1121483 cni.go:84] Creating CNI manager for ""
	I0908 13:35:55.232585 1121483 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 13:35:55.232600 1121483 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:35:55.232625 1121483 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.135 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-674449 NodeName:addons-674449 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:35:55.232794 1121483 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-674449"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.135"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.135"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 13:35:55.232865 1121483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:35:55.245934 1121483 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:35:55.246025 1121483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:35:55.259082 1121483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0908 13:35:55.282122 1121483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:35:55.305475 1121483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
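
The 2216-byte kubeadm.yaml.new written above is the rendered manifest from the config dump earlier in the log: one file stacking an InitConfiguration (node-local bootstrap settings), a ClusterConfiguration (control-plane layout and component extraArgs), a KubeletConfiguration, and a KubeProxyConfiguration, separated by `---`. A manifest like this can be sanity-checked without mutating the node via `kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run`.
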
	I0908 13:35:55.328107 1121483 ssh_runner.go:195] Run: grep 192.168.39.135	control-plane.minikube.internal$ /etc/hosts
	I0908 13:35:55.332699 1121483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:35:55.348397 1121483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:35:55.493452 1121483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:35:55.527196 1121483 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449 for IP: 192.168.39.135
	I0908 13:35:55.527228 1121483 certs.go:194] generating shared ca certs ...
	I0908 13:35:55.527254 1121483 certs.go:226] acquiring lock for ca certs: {Name:mk10dcd85eee4d8b0413bd848f61156bf964b6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:55.527431 1121483 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key
	I0908 13:35:55.581265 1121483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt ...
	I0908 13:35:55.581299 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt: {Name:mkb841254efa120fc223dc001b38e47f4b61e287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:55.581508 1121483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key ...
	I0908 13:35:55.581522 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key: {Name:mkda63d98877189beec428a541ac76fbe17d5e56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:55.581627 1121483 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key
	I0908 13:35:55.999742 1121483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.crt ...
	I0908 13:35:55.999783 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.crt: {Name:mk1bfdea7307634a91cbdd6c5bda81e167fbca12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:56.000006 1121483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key ...
	I0908 13:35:56.000023 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key: {Name:mk8d3ac295fc6c031b1a1a4a6cea4587c812b2c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:56.000137 1121483 certs.go:256] generating profile certs ...
	I0908 13:35:56.000227 1121483 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.key
	I0908 13:35:56.000248 1121483 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt with IP's: []
	I0908 13:35:56.169952 1121483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt ...
	I0908 13:35:56.169994 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: {Name:mk299c9b5026246c7da12f75012376af87563272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:56.170215 1121483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.key ...
	I0908 13:35:56.170234 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.key: {Name:mk01bcda3c39c55efd2c44722d9d80e6f0f96524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:56.170343 1121483 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.key.48c0a353
	I0908 13:35:56.170374 1121483 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.crt.48c0a353 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.135]
	I0908 13:35:56.288683 1121483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.crt.48c0a353 ...
	I0908 13:35:56.288721 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.crt.48c0a353: {Name:mk11a5f304bb4d9240d93ae6505d287780c620b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:56.288953 1121483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.key.48c0a353 ...
	I0908 13:35:56.288974 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.key.48c0a353: {Name:mk1f66a8149139a3022d5774b744ca957f84333b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:56.289085 1121483 certs.go:381] copying /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.crt.48c0a353 -> /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.crt
	I0908 13:35:56.289237 1121483 certs.go:385] copying /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.key.48c0a353 -> /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.key
	I0908 13:35:56.289327 1121483 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/proxy-client.key
	I0908 13:35:56.289365 1121483 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/proxy-client.crt with IP's: []
	I0908 13:35:56.517907 1121483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/proxy-client.crt ...
	I0908 13:35:56.517948 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/proxy-client.crt: {Name:mke58dcf28e8b03334b49bfb1c6be66015c15024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:56.518156 1121483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/proxy-client.key ...
	I0908 13:35:56.518175 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/proxy-client.key: {Name:mk8bd6870dea72d6f36f2a00e0dcc69cf20b3c81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:56.518391 1121483 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 13:35:56.518442 1121483 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem (1082 bytes)
	I0908 13:35:56.518475 1121483 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:35:56.518510 1121483 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem (1675 bytes)
	I0908 13:35:56.519162 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:35:56.552834 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 13:35:56.586185 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:35:56.619421 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 13:35:56.652508 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 13:35:56.683992 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 13:35:56.715989 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:35:56.750557 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 13:35:56.784621 1121483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:35:56.819776 1121483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:35:56.843444 1121483 ssh_runner.go:195] Run: openssl version
	I0908 13:35:56.851060 1121483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:35:56.866485 1121483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:35:56.873155 1121483 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:35 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:35:56.873249 1121483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:35:56.882172 1121483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
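
The two commands above implement OpenSSL's hashed-directory convention: `openssl x509 -hash` prints the subject-name hash of the CA, and the certificate is symlinked as <hash>.0 (here b5213941.0, the hash of minikube's CA subject, as the log itself shows) so that TLS clients scanning /etc/ssl/certs can look the CA up by hash. A small sketch that loads the installed PEM and prints the subject being hashed (path taken from the log; run inside the guest VM):

    // Sketch: inspect the CA that the b5213941.0 symlink points at.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block in minikubeCA.pem")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // The subject printed here is the input `openssl x509 -hash` digests.
        fmt.Printf("subject=%s CA=%v notAfter=%s\n", cert.Subject, cert.IsCA, cert.NotAfter)
    }
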
	I0908 13:35:56.898584 1121483 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:35:56.904803 1121483 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 13:35:56.904883 1121483 kubeadm.go:392] StartCluster: {Name:addons-674449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-674449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:35:56.904965 1121483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 13:35:56.905025 1121483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:35:56.952185 1121483 cri.go:89] found id: ""
	I0908 13:35:56.952269 1121483 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:35:56.965761 1121483 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 13:35:56.979484 1121483 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 13:35:56.993302 1121483 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 13:35:56.993328 1121483 kubeadm.go:157] found existing configuration files:
	
	I0908 13:35:56.993384 1121483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 13:35:57.005851 1121483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 13:35:57.005979 1121483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 13:35:57.019162 1121483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 13:35:57.031553 1121483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 13:35:57.031672 1121483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 13:35:57.045257 1121483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 13:35:57.057885 1121483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 13:35:57.057952 1121483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 13:35:57.072893 1121483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 13:35:57.085796 1121483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 13:35:57.085891 1121483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
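
The four grep/rm cycles above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here every grep exits with status 2 because the files do not exist yet on a fresh VM). A minimal shell sketch of the same logic, using the endpoint and file set from the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or missing: safe to clear before kubeadm init
      fi
    done
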
	I0908 13:35:57.099690 1121483 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0908 13:35:57.285747 1121483 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 13:36:08.533801 1121483 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 13:36:08.533928 1121483 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 13:36:08.534076 1121483 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 13:36:08.534232 1121483 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 13:36:08.534324 1121483 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 13:36:08.534427 1121483 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 13:36:08.536114 1121483 out.go:252]   - Generating certificates and keys ...
	I0908 13:36:08.536213 1121483 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 13:36:08.536295 1121483 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 13:36:08.536369 1121483 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 13:36:08.536422 1121483 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 13:36:08.536471 1121483 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 13:36:08.536515 1121483 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 13:36:08.536577 1121483 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 13:36:08.536758 1121483 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-674449 localhost] and IPs [192.168.39.135 127.0.0.1 ::1]
	I0908 13:36:08.536840 1121483 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 13:36:08.536980 1121483 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-674449 localhost] and IPs [192.168.39.135 127.0.0.1 ::1]
	I0908 13:36:08.537056 1121483 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 13:36:08.537120 1121483 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 13:36:08.537158 1121483 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 13:36:08.537208 1121483 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 13:36:08.537254 1121483 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 13:36:08.537303 1121483 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 13:36:08.537360 1121483 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 13:36:08.537412 1121483 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 13:36:08.537465 1121483 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 13:36:08.537539 1121483 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 13:36:08.537599 1121483 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 13:36:08.538854 1121483 out.go:252]   - Booting up control plane ...
	I0908 13:36:08.538963 1121483 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 13:36:08.539031 1121483 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 13:36:08.539116 1121483 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 13:36:08.539222 1121483 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 13:36:08.539310 1121483 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 13:36:08.539407 1121483 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 13:36:08.539475 1121483 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 13:36:08.539508 1121483 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 13:36:08.539628 1121483 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 13:36:08.539744 1121483 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 13:36:08.539799 1121483 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.212722ms
	I0908 13:36:08.539892 1121483 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 13:36:08.539977 1121483 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.135:8443/livez
	I0908 13:36:08.540060 1121483 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 13:36:08.540129 1121483 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 13:36:08.540198 1121483 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.547806335s
	I0908 13:36:08.540257 1121483 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.762446377s
	I0908 13:36:08.540312 1121483 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.003710536s
	I0908 13:36:08.540403 1121483 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 13:36:08.540506 1121483 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 13:36:08.540565 1121483 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 13:36:08.540736 1121483 kubeadm.go:310] [mark-control-plane] Marking the node addons-674449 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 13:36:08.540812 1121483 kubeadm.go:310] [bootstrap-token] Using token: ukqkal.d96jvos460bmo47j
	I0908 13:36:08.542333 1121483 out.go:252]   - Configuring RBAC rules ...
	I0908 13:36:08.542440 1121483 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 13:36:08.542516 1121483 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 13:36:08.542649 1121483 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 13:36:08.542789 1121483 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 13:36:08.542894 1121483 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 13:36:08.542973 1121483 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 13:36:08.543096 1121483 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 13:36:08.543145 1121483 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 13:36:08.543183 1121483 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 13:36:08.543189 1121483 kubeadm.go:310] 
	I0908 13:36:08.543238 1121483 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 13:36:08.543244 1121483 kubeadm.go:310] 
	I0908 13:36:08.543309 1121483 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 13:36:08.543315 1121483 kubeadm.go:310] 
	I0908 13:36:08.543340 1121483 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 13:36:08.543396 1121483 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 13:36:08.543441 1121483 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 13:36:08.543447 1121483 kubeadm.go:310] 
	I0908 13:36:08.543492 1121483 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 13:36:08.543501 1121483 kubeadm.go:310] 
	I0908 13:36:08.543541 1121483 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 13:36:08.543547 1121483 kubeadm.go:310] 
	I0908 13:36:08.543605 1121483 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 13:36:08.543704 1121483 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 13:36:08.543796 1121483 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 13:36:08.543806 1121483 kubeadm.go:310] 
	I0908 13:36:08.543873 1121483 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 13:36:08.543941 1121483 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 13:36:08.543944 1121483 kubeadm.go:310] 
	I0908 13:36:08.544015 1121483 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ukqkal.d96jvos460bmo47j \
	I0908 13:36:08.544114 1121483 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b74fdb5b49b8a5f2d0d805722ad58fb11edbe1ed30e10a54ed528060545c93ba \
	I0908 13:36:08.544134 1121483 kubeadm.go:310] 	--control-plane 
	I0908 13:36:08.544140 1121483 kubeadm.go:310] 
	I0908 13:36:08.544208 1121483 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 13:36:08.544217 1121483 kubeadm.go:310] 
	I0908 13:36:08.544284 1121483 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ukqkal.d96jvos460bmo47j \
	I0908 13:36:08.544396 1121483 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b74fdb5b49b8a5f2d0d805722ad58fb11edbe1ed30e10a54ed528060545c93ba 
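
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, and a joining node can recompute it from the CA certificate. A sketch using the standard openssl pipeline from the kubeadm docs, pointed at minikube's certificateDir (/var/lib/minikube/certs, per the [certs] phase above); it should print the b74fdb5b… value shown in the join command:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'
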
	I0908 13:36:08.544408 1121483 cni.go:84] Creating CNI manager for ""
	I0908 13:36:08.544416 1121483 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 13:36:08.546046 1121483 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 13:36:08.547894 1121483 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 13:36:08.563325 1121483 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
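
The 496-byte /etc/cni/net.d/1-k8s.conflist written here is minikube's bridge CNI chain, selected because the kvm2 driver plus crio has no other CNI configured. The exact payload is not logged; the following is only a representative conflist of the same shape (bridge plugin with host-local IPAM, chained with portmap), with illustrative field values rather than a byte-for-byte copy:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
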
	I0908 13:36:08.592491 1121483 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 13:36:08.592603 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:08.592603 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-674449 minikube.k8s.io/updated_at=2025_09_08T13_36_08_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba minikube.k8s.io/name=addons-674449 minikube.k8s.io/primary=true
	I0908 13:36:08.648419 1121483 ops.go:34] apiserver oom_adj: -16
	I0908 13:36:08.753108 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:09.254083 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:09.753595 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:10.253584 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:10.753182 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:11.253294 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:11.753963 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:12.253541 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:12.753317 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:13.253928 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:13.753551 1121483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:36:13.903395 1121483 kubeadm.go:1105] duration metric: took 5.310864573s to wait for elevateKubeSystemPrivileges
	I0908 13:36:13.903449 1121483 kubeadm.go:394] duration metric: took 16.998572777s to StartCluster
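
The repeated `kubectl get sa default` calls at roughly 500ms intervals above are minikube waiting for the controller-manager's serviceaccount controller to create the default service account before it can bind cluster-admin privileges to it; that wait is the 5.31s elevateKubeSystemPrivileges metric. Equivalent shell, as a sketch:

    until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # default SA appears once the serviceaccount controller has run
    done
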
	I0908 13:36:13.903478 1121483 settings.go:142] acquiring lock: {Name:mkc208e3a70732deaf67c191918f201f73e82457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:36:13.903681 1121483 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 13:36:13.904407 1121483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/kubeconfig: {Name:mk93422b0007d912fa8f198f71d62d01a418d566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:36:13.904660 1121483 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 13:36:13.904687 1121483 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 13:36:13.904773 1121483 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
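
The toEnable map above is the resolved addon set for this profile; note volcano:true, which fails later because the addon rejects crio. From the host, the same per-profile toggles are driven with the minikube addons subcommands, e.g.:

    out/minikube-linux-amd64 -p addons-674449 addons list            # per-profile addon status
    out/minikube-linux-amd64 -p addons-674449 addons enable ingress
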
	I0908 13:36:13.904966 1121483 addons.go:69] Setting yakd=true in profile "addons-674449"
	I0908 13:36:13.904991 1121483 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-674449"
	I0908 13:36:13.905010 1121483 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-674449"
	I0908 13:36:13.905019 1121483 config.go:182] Loaded profile config "addons-674449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:36:13.905033 1121483 addons.go:238] Setting addon yakd=true in "addons-674449"
	I0908 13:36:13.905029 1121483 addons.go:69] Setting registry-creds=true in profile "addons-674449"
	I0908 13:36:13.905052 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.905062 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.905061 1121483 addons.go:69] Setting storage-provisioner=true in profile "addons-674449"
	I0908 13:36:13.905076 1121483 addons.go:69] Setting metrics-server=true in profile "addons-674449"
	I0908 13:36:13.905080 1121483 addons.go:238] Setting addon storage-provisioner=true in "addons-674449"
	I0908 13:36:13.905086 1121483 addons.go:238] Setting addon metrics-server=true in "addons-674449"
	I0908 13:36:13.905087 1121483 addons.go:69] Setting volcano=true in profile "addons-674449"
	I0908 13:36:13.905104 1121483 addons.go:238] Setting addon volcano=true in "addons-674449"
	I0908 13:36:13.905121 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.905132 1121483 addons.go:69] Setting default-storageclass=true in profile "addons-674449"
	I0908 13:36:13.905139 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.905147 1121483 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-674449"
	I0908 13:36:13.905311 1121483 addons.go:69] Setting volumesnapshots=true in profile "addons-674449"
	I0908 13:36:13.905353 1121483 addons.go:238] Setting addon volumesnapshots=true in "addons-674449"
	I0908 13:36:13.905392 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.904979 1121483 addons.go:69] Setting inspektor-gadget=true in profile "addons-674449"
	I0908 13:36:13.905033 1121483 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-674449"
	I0908 13:36:13.905121 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.905551 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.905563 1121483 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-674449"
	I0908 13:36:13.905575 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.905589 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.905608 1121483 addons.go:69] Setting gcp-auth=true in profile "addons-674449"
	I0908 13:36:13.905618 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.905626 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.905636 1121483 addons.go:69] Setting ingress=true in profile "addons-674449"
	I0908 13:36:13.905647 1121483 addons.go:238] Setting addon ingress=true in "addons-674449"
	I0908 13:36:13.905628 1121483 mustload.go:65] Loading cluster: addons-674449
	I0908 13:36:13.905683 1121483 addons.go:69] Setting ingress-dns=true in profile "addons-674449"
	I0908 13:36:13.905694 1121483 addons.go:69] Setting cloud-spanner=true in profile "addons-674449"
	I0908 13:36:13.905705 1121483 addons.go:238] Setting addon cloud-spanner=true in "addons-674449"
	I0908 13:36:13.905707 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.905726 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.905728 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.905788 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.905805 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.905810 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.905843 1121483 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-674449"
	I0908 13:36:13.905844 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.905881 1121483 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-674449"
	I0908 13:36:13.905065 1121483 addons.go:238] Setting addon registry-creds=true in "addons-674449"
	I0908 13:36:13.905596 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.905904 1121483 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-674449"
	I0908 13:36:13.905915 1121483 addons.go:69] Setting registry=true in profile "addons-674449"
	I0908 13:36:13.905927 1121483 addons.go:238] Setting addon registry=true in "addons-674449"
	I0908 13:36:13.905948 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.905952 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.905974 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.905979 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.905547 1121483 addons.go:238] Setting addon inspektor-gadget=true in "addons-674449"
	I0908 13:36:13.905695 1121483 addons.go:238] Setting addon ingress-dns=true in "addons-674449"
	I0908 13:36:13.906046 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.906064 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.905951 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.906142 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.906420 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.906451 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.906475 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.906067 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.905916 1121483 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-674449"
	I0908 13:36:13.906513 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.906554 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.906589 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.906674 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.906740 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.907052 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.907085 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.907116 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.907150 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.907157 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.907273 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.908030 1121483 out.go:179] * Verifying Kubernetes components...
	I0908 13:36:13.909750 1121483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:36:13.929023 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41429
	I0908 13:36:13.929039 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I0908 13:36:13.929225 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41525
	I0908 13:36:13.929242 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46149
	I0908 13:36:13.929344 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45035
	I0908 13:36:13.929782 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.929922 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.929929 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.929991 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.930426 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.930500 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:13.930513 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:13.930517 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:13.930535 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:13.930539 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:13.930518 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:13.930999 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:13.931064 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:13.931082 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:13.931094 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:13.931110 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:13.931148 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:13.931352 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:13.931604 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:13.931621 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:13.931815 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:13.931887 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I0908 13:36:13.932196 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:13.936112 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.936157 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.936186 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.936205 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.936337 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.936376 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.937580 1121483 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-674449"
	I0908 13:36:13.937624 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.937862 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.937891 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.945480 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.945559 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.948385 1121483 config.go:182] Loaded profile config "addons-674449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:36:13.948682 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.948745 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.948822 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.948867 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.950251 1121483 addons.go:238] Setting addon default-storageclass=true in "addons-674449"
	I0908 13:36:13.950303 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:13.950681 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.950737 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.956233 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.967820 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0908 13:36:13.968028 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:13.968066 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:13.968647 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.968713 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:13.969570 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.969630 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:13.969938 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:13.969959 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:13.970673 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:13.971113 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33593
	I0908 13:36:13.971388 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
	I0908 13:36:13.971454 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0908 13:36:13.972066 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.972627 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:13.972662 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:13.973157 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:13.982738 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I0908 13:36:13.983690 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.984911 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:13.984943 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:13.985525 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:13.985991 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:13.989209 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:13.991422 1121483 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 13:36:13.992819 1121483 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 13:36:13.992851 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 13:36:13.992884 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:13.993196 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38173
	I0908 13:36:13.993804 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.994493 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:13.994519 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:13.994922 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:13.995256 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:13.997478 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46449
	I0908 13:36:13.997725 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:13.997750 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:13.997727 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36521
	I0908 13:36:13.998217 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:13.998240 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:13.998268 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.998559 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:13.998733 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:13.998748 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:13.998913 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:13.998958 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:13.999123 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:13.999126 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:13.999271 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:13.999770 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:13.999826 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.000019 1121483 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 13:36:14.000190 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.000228 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.000426 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.000453 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.000627 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.000649 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.000993 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.001009 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.001495 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.001517 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.001623 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.001647 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.001725 1121483 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 13:36:14.001747 1121483 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 13:36:14.001769 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.001926 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.001991 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.002035 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.002756 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.002806 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.003431 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.003478 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.003890 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.006111 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.006639 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.006660 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.007003 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.007261 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.007513 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.007685 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:14.008152 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.008200 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.008451 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.018358 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0908 13:36:14.019179 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.019860 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.019889 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.020403 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.021281 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.021344 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.024300 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I0908 13:36:14.025025 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.025714 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.025736 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.026225 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.026866 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.026919 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.030553 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I0908 13:36:14.031462 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.032306 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.032334 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.032793 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.033494 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.033553 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.034173 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0908 13:36:14.035564 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.044457 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.044513 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.045116 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.045783 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.045846 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.046363 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0908 13:36:14.046439 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0908 13:36:14.047181 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.047992 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.048017 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.048124 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.048537 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.048783 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.049046 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.049074 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.049147 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0908 13:36:14.049557 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.050314 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.050929 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.050985 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.051475 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.051497 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.052072 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.052379 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.054268 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38483
	I0908 13:36:14.054444 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
	I0908 13:36:14.054606 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36115
	I0908 13:36:14.055093 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.055266 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.056050 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37171
	I0908 13:36:14.056153 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.056173 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.056531 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.056552 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.056636 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.057124 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.057177 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.057295 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.057315 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.057387 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.057949 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.057959 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.057972 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.057953 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43401
	I0908 13:36:14.057995 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:14.058008 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:14.061935 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.061996 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35537
	I0908 13:36:14.062090 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.061940 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.062198 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:14.062216 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:14.062228 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:14.062240 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:14.062249 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:14.062325 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35813
	I0908 13:36:14.062756 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:14.062798 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:14.062807 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	W0908 13:36:14.062919 1121483 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
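
This volcano warning is expected on this runtime: the addon's enable callback rejects crio, so minikube logs the error and continues with the remaining addons instead of failing the whole start. A sketch of silencing it up front by turning the addon off for this profile:

    out/minikube-linux-amd64 -p addons-674449 addons disable volcano
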
	I0908 13:36:14.063377 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.064104 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.064136 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.064335 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.065199 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.065396 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.065425 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.065652 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.066213 1121483 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0908 13:36:14.066246 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.066622 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.066281 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.066556 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.066967 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.066984 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.067072 1121483 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 13:36:14.067204 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.067389 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.067832 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.067903 1121483 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 13:36:14.067925 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 13:36:14.067952 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.068954 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.069482 1121483 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 13:36:14.070734 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.070851 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.070907 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.071010 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45765
	I0908 13:36:14.071395 1121483 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 13:36:14.071416 1121483 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 13:36:14.071439 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.071584 1121483 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:36:14.071584 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.072166 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.073028 1121483 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 13:36:14.074439 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.074732 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.074738 1121483 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 13:36:14.074760 1121483 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 13:36:14.074798 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.074859 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.074891 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.074944 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.074953 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I0908 13:36:14.074965 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.074983 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.075211 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.075347 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.075403 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:14.075448 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:14.075560 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.075761 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.076088 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.076201 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.077678 1121483 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:36:14.077852 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0908 13:36:14.078641 1121483 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:36:14.079315 1121483 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 13:36:14.079339 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 13:36:14.079369 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.079474 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.079478 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.079562 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.079582 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.079627 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.079757 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.079885 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.080157 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.080373 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.080513 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.080801 1121483 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:36:14.080816 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:36:14.080846 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.080887 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.081193 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.085159 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.085126 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.085489 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.085867 1121483 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:36:14.085887 1121483 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:36:14.085911 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.085972 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.085985 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.086026 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.086299 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.086380 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.086398 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.086503 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.086612 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.086953 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.087223 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.087515 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.087718 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.087751 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.087790 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.088251 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.088276 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.088318 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.088938 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.089271 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.089573 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.089645 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.089834 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.090450 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.091066 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.092412 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.092550 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I0908 13:36:14.092815 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.092630 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.092940 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.093060 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.093343 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.093580 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.094684 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.094802 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.095732 1121483 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 13:36:14.096079 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.096224 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.096620 1121483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 13:36:14.097647 1121483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 13:36:14.097683 1121483 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 13:36:14.097720 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.098846 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36697
	I0908 13:36:14.098860 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I0908 13:36:14.099400 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.099757 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.099888 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.100001 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.100121 1121483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 13:36:14.101238 1121483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 13:36:14.101249 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.101277 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.101979 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.102006 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.102038 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.102460 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.102494 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.102864 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.103129 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.103199 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.104122 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.104162 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.104168 1121483 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 13:36:14.104260 1121483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 13:36:14.104458 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.104697 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.104881 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.105036 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.105588 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.105673 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.107269 1121483 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 13:36:14.107282 1121483 out.go:179]   - Using image docker.io/busybox:stable
	I0908 13:36:14.107303 1121483 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0908 13:36:14.107284 1121483 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 13:36:14.108713 1121483 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 13:36:14.108755 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 13:36:14.108768 1121483 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 13:36:14.108771 1121483 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 13:36:14.108784 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 13:36:14.108787 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.108718 1121483 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 13:36:14.108950 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 13:36:14.108965 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.108808 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.110593 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42711
	I0908 13:36:14.111147 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.111972 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.112004 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.112011 1121483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 13:36:14.112640 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.112913 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.114033 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.114527 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.114567 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.114856 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.114884 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.114984 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.115173 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.115201 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.115276 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.115258 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.115310 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.115524 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.115552 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.115524 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.115665 1121483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 13:36:14.115808 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.115856 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.116019 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.116026 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.116035 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.116175 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.116639 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.116668 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.118024 1121483 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 13:36:14.118060 1121483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 13:36:14.118102 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.118994 1121483 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 13:36:14.119919 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0908 13:36:14.120596 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:14.121475 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:14.121503 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:14.121907 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:14.121983 1121483 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 13:36:14.122142 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:14.122549 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.123043 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.123075 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.123216 1121483 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 13:36:14.123232 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 13:36:14.123255 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.123357 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.123581 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.123796 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.123928 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.124452 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:14.126023 1121483 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 13:36:14.127304 1121483 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 13:36:14.127335 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 13:36:14.127368 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:14.127464 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.127998 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.128026 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.128191 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.128416 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.128593 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.128736 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:14.130692 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.131311 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:14.131343 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:14.131527 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:14.131781 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:14.131927 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:14.132092 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
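
The repeated "new ssh client" and "scp memory --> ..." lines above show the provisioning pattern: minikube streams in-memory manifest bytes to the guest over SSH, authenticating with the machine's id_rsa key. A minimal Go sketch of that pattern, assuming golang.org/x/crypto/ssh and a remote "sudo tee"; this is an illustration, not minikube's actual sshutil/ssh_runner code, and the destination file name in main is a hypothetical stand-in:

	// Hedged sketch, not minikube's sshutil/ssh_runner: stream in-memory
	// bytes to a file on the guest over SSH, authenticating with the
	// machine key, via a remote "sudo tee".
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func copyMemoryToRemote(addr, user, keyPath, dst string, payload []byte) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		})
		if err != nil {
			return err
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()

		stdin, err := sess.StdinPipe()
		if err != nil {
			return err
		}
		// Mirrors the "scp memory --> /etc/kubernetes/addons/..." log lines:
		// the payload never touches the local disk.
		if err := sess.Start(fmt.Sprintf("sudo tee %s >/dev/null", dst)); err != nil {
			return err
		}
		if _, err := stdin.Write(payload); err != nil {
			return err
		}
		stdin.Close()
		return sess.Wait()
	}

	func main() {
		// Address, user and key path echo the log; the destination is hypothetical.
		err := copyMemoryToRemote("192.168.39.135:22", "docker",
			"/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa",
			"/etc/kubernetes/addons/example.yaml", []byte("kind: ConfigMap\n"))
		if err != nil {
			fmt.Println("copy failed:", err)
		}
	}
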
	I0908 13:36:14.963478 1121483 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.058770345s)
	I0908 13:36:14.963555 1121483 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.053759684s)
	I0908 13:36:14.963668 1121483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:36:14.963688 1121483 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 13:36:15.122990 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 13:36:15.249523 1121483 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 13:36:15.249567 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 13:36:15.380807 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 13:36:15.438792 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 13:36:15.442808 1121483 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:36:15.442837 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 13:36:15.444868 1121483 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 13:36:15.444902 1121483 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 13:36:15.454385 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:36:15.615567 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 13:36:15.696631 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 13:36:15.699136 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:36:15.755742 1121483 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 13:36:15.755785 1121483 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 13:36:15.767801 1121483 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 13:36:15.767830 1121483 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 13:36:15.920526 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 13:36:15.963983 1121483 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 13:36:15.964035 1121483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 13:36:16.091018 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 13:36:16.123674 1121483 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 13:36:16.123715 1121483 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 13:36:16.149249 1121483 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 13:36:16.149295 1121483 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 13:36:16.243977 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:36:16.312594 1121483 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 13:36:16.312638 1121483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 13:36:16.328839 1121483 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:36:16.328881 1121483 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 13:36:16.408890 1121483 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 13:36:16.408934 1121483 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 13:36:16.485440 1121483 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 13:36:16.485563 1121483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 13:36:16.535214 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:36:16.610016 1121483 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 13:36:16.610076 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 13:36:16.613956 1121483 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 13:36:16.613990 1121483 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 13:36:16.707943 1121483 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 13:36:16.707986 1121483 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 13:36:16.920214 1121483 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 13:36:16.920256 1121483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 13:36:16.978439 1121483 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 13:36:16.978472 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 13:36:17.141871 1121483 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 13:36:17.141916 1121483 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 13:36:17.143071 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 13:36:17.239315 1121483 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 13:36:17.239347 1121483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 13:36:17.388948 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 13:36:17.551780 1121483 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:36:17.551812 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 13:36:17.747532 1121483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 13:36:17.747563 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 13:36:18.250230 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:36:18.412968 1121483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 13:36:18.413004 1121483 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0908 13:36:18.593650 1121483 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.629905723s)
	I0908 13:36:18.593706 1121483 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0908 13:36:18.593670 1121483 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.629968888s)
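
The sed pipeline that just completed (3.6s) rewrites the coredns ConfigMap before kubectl replace: it inserts a hosts stanza mapping 192.168.39.1 to host.minikube.internal ahead of the forward plugin, and adds a log directive before errors. A minimal Go sketch of just the hosts-stanza insertion, assuming the stock 8-space Corefile plugin indentation; the sample Corefile in main is illustrative, and minikube itself does this with sed plus kubectl replace, not with a helper like this:

	// Hedged sketch of the Corefile edit performed by the sed pipeline:
	// insert a hosts{} stanza mapping the host IP to host.minikube.internal
	// immediately before the forward plugin.
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, hostIP string) string {
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			// Mirrors sed's '/^        forward . \/etc\/resolv.conf.*/i ...'
			if strings.HasPrefix(line, "        forward . /etc/resolv.conf") {
				b.WriteString(stanza)
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		// Illustrative stock Corefile fragment.
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
	}
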
	I0908 13:36:18.594500 1121483 node_ready.go:35] waiting up to 6m0s for node "addons-674449" to be "Ready" ...
	I0908 13:36:18.600986 1121483 node_ready.go:49] node "addons-674449" is "Ready"
	I0908 13:36:18.601024 1121483 node_ready.go:38] duration metric: took 6.499153ms for node "addons-674449" to be "Ready" ...
	I0908 13:36:18.601043 1121483 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:36:18.601099 1121483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:36:18.929478 1121483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 13:36:18.929517 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 13:36:19.099237 1121483 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-674449" context rescaled to 1 replicas
	I0908 13:36:19.361738 1121483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 13:36:19.361771 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0908 13:36:19.386660 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.26361423s)
	I0908 13:36:19.386735 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:19.386757 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:19.387156 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:19.387183 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:19.387195 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:19.387207 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:19.387233 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:19.387526 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:19.387543 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:19.675842 1121483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 13:36:19.675879 1121483 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 13:36:20.366376 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 13:36:21.494952 1121483 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 13:36:21.495035 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:21.498499 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:21.499003 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:21.499033 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:21.499258 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:21.499497 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:21.499666 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:21.499836 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:22.333430 1121483 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 13:36:22.770624 1121483 addons.go:238] Setting addon gcp-auth=true in "addons-674449"
	I0908 13:36:22.770704 1121483 host.go:66] Checking if "addons-674449" exists ...
	I0908 13:36:22.771080 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:22.771122 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:22.788357 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0908 13:36:22.788955 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:22.789592 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:22.789617 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:22.789965 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:22.790476 1121483 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:36:22.790512 1121483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:36:22.807729 1121483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I0908 13:36:22.808284 1121483 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:36:22.808887 1121483 main.go:141] libmachine: Using API Version  1
	I0908 13:36:22.808904 1121483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:36:22.809296 1121483 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:36:22.809553 1121483 main.go:141] libmachine: (addons-674449) Calling .GetState
	I0908 13:36:22.811526 1121483 main.go:141] libmachine: (addons-674449) Calling .DriverName
	I0908 13:36:22.811846 1121483 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 13:36:22.811884 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHHostname
	I0908 13:36:22.814692 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:22.815185 1121483 main.go:141] libmachine: (addons-674449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:26:15", ip: ""} in network mk-addons-674449: {Iface:virbr1 ExpiryTime:2025-09-08 14:35:39 +0000 UTC Type:0 Mac:52:54:00:7c:26:15 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-674449 Clientid:01:52:54:00:7c:26:15}
	I0908 13:36:22.815221 1121483 main.go:141] libmachine: (addons-674449) DBG | domain addons-674449 has defined IP address 192.168.39.135 and MAC address 52:54:00:7c:26:15 in network mk-addons-674449
	I0908 13:36:22.815457 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHPort
	I0908 13:36:22.815744 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHKeyPath
	I0908 13:36:22.816118 1121483 main.go:141] libmachine: (addons-674449) Calling .GetSSHUsername
	I0908 13:36:22.816341 1121483 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/addons-674449/id_rsa Username:docker}
	I0908 13:36:24.994988 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.614120293s)
	I0908 13:36:24.995036 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (9.556200931s)
	I0908 13:36:24.995060 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995075 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995088 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995086 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.540663558s)
	I0908 13:36:24.995107 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995122 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995137 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995193 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.379591634s)
	I0908 13:36:24.995224 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995236 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995273 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.296102538s)
	I0908 13:36:24.995323 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995324 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.298655613s)
	I0908 13:36:24.995335 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995363 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995373 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995436 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.074883118s)
	I0908 13:36:24.995458 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995469 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995505 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.904455161s)
	I0908 13:36:24.995556 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995566 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995591 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.751576793s)
	W0908 13:36:24.995617 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:24.995688 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:24.995688 1121483 retry.go:31] will retry after 359.211915ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
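
kubectl rejected /etc/kubernetes/addons/ig-crd.yaml here because the file carries no apiVersion or kind, and retry.go schedules a re-run after a randomized delay ("will retry after 359.211915ms"). A minimal sketch of that retry-with-jittered-backoff shape; the 250ms base delay and overall budget below are assumptions, not the real retry.go parameters:

	// Hedged sketch of the shape behind "will retry after 359.211915ms":
	// re-run a failing step with jittered, doubling delays until it
	// succeeds or the budget is spent.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(budget time.Duration, step func() error) error {
		start := time.Now()
		base := 250 * time.Millisecond
		for attempt := 0; ; attempt++ {
			err := step()
			if err == nil {
				return nil
			}
			if time.Since(start) > budget {
				return fmt.Errorf("after %d attempts: %w", attempt+1, err)
			}
			// First retry lands in [250ms, 500ms), consistent with the
			// ~359ms seen in the log.
			delay := base<<uint(attempt) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
	}

	func main() {
		calls := 0
		_ = retryWithBackoff(5*time.Second, func() error {
			calls++
			if calls < 3 {
				return errors.New("apply failed") // stands in for the ig-crd.yaml validation error
			}
			return nil
		})
	}
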
	I0908 13:36:24.995719 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:24.995751 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:24.995757 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:24.995761 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.460515546s)
	I0908 13:36:24.995785 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:24.995786 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995794 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:24.995799 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995801 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995808 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995765 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995862 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995867 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.852754774s)
	I0908 13:36:24.995893 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:24.995896 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995906 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995920 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:24.995922 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:24.995927 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:24.995931 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:24.995935 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995941 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.995949 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.995983 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.606999942s)
	I0908 13:36:24.995942 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.996010 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.996019 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.996113 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:24.996120 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:24.996130 1121483 addons.go:479] Verifying addon ingress=true in "addons-674449"
	I0908 13:36:24.996778 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:24.996818 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:24.996858 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:24.996882 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:24.996900 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.996917 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.997199 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:24.997233 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:24.997250 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:24.997266 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:24.998309 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:24.998365 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:24.998371 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.000613 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.000692 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.000858 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.000901 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.000922 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:25.000942 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:25.001407 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.001460 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.001484 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.001530 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.001543 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.001554 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:25.001563 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:25.001643 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.001695 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.001722 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.001729 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.001741 1121483 addons.go:479] Verifying addon registry=true in "addons-674449"
	I0908 13:36:25.002058 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.002095 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.002132 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.002156 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.002605 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.002616 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.002626 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:25.002634 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:25.003057 1121483 out.go:179] * Verifying ingress addon...
	I0908 13:36:25.003230 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.003270 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.003280 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.003296 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.003303 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.003309 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.003329 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.003345 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.003363 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.003401 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.003411 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.003420 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:25.003428 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:25.003498 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.003513 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.003521 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:25.003521 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.003536 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.003539 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:25.003708 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.004430 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.003805 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.004495 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.003830 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.004756 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:25.004870 1121483 out.go:179] * Verifying registry addon...
	I0908 13:36:25.005678 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.005702 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.005722 1121483 addons.go:479] Verifying addon metrics-server=true in "addons-674449"
	I0908 13:36:25.006123 1121483 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 13:36:25.006668 1121483 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-674449 service yakd-dashboard -n yakd-dashboard
	
	I0908 13:36:25.007737 1121483 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 13:36:25.136698 1121483 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 13:36:25.136739 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:25.136745 1121483 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 13:36:25.136768 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
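The blocks of `kapi.go:96` lines that dominate the rest of this log are minikube polling each addon's pods by label selector until they leave Pending. A minimal client-go sketch of that pattern follows; the interval and timeout are illustrative assumptions, not minikube's actual kapi.go values.

```go
// Sketch: poll until every pod matching a label selector is Running.
// Assumes a pre-built *kubernetes.Clientset; interval/timeout are made up.
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPods(cs *kubernetes.Clientset, ns, selector string) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending, as in the log lines above
				}
			}
			return true, nil
		})
}
```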
	I0908 13:36:25.140893 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:25.140937 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:25.141347 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.141370 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	W0908 13:36:25.141498 1121483 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
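The `storage-provisioner-rancher` warning above is the API server's optimistic-concurrency check firing: the StorageClass update carried a stale resourceVersion because concurrent addon callbacks raced to set the default class. The usual client-go remedy is to re-read and re-apply inside `retry.RetryOnConflict`; the sketch below shows that shape and is an assumption, not minikube's actual callback code.

```go
// Sketch: mark a StorageClass as default, retrying on "the object has been
// modified" conflicts by re-reading a fresh copy each attempt.
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func markDefault(cs *kubernetes.Clientset, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{})
		return err
	})
}
```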
	I0908 13:36:25.166465 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:25.166502 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:25.166882 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:25.166906 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:25.282322 1121483 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.681193951s)
	I0908 13:36:25.282346 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.032059037s)
	I0908 13:36:25.282375 1121483 api_server.go:72] duration metric: took 11.3776543s to wait for apiserver process to appear ...
	I0908 13:36:25.282384 1121483 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:36:25.282406 1121483 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	W0908 13:36:25.282409 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 13:36:25.282440 1121483 retry.go:31] will retry after 293.291228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
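This failure (and the identical retry message it wraps) is the standard CRD race: `csi-hostpath-snapshotclass.yaml` creates a VolumeSnapshotClass in the same apply batch as the CRD that defines it, and the REST mapping for the new kind is not available until the CRD is established, hence "ensure CRDs are installed first". minikube's answer is to retry the whole apply; an alternative is to wait for the CRD's Established condition first, as in this hedged sketch (CRD name taken from the log, polling values assumed):

```go
// Sketch: block until a CRD reports Established=True so that custom
// resources of that kind can be applied without a mapping error.
package example

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

func waitEstablished(cs *apiextclient.Clientset, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // CRD not visible yet; keep polling
			}
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

// e.g. waitEstablished(cs, "volumesnapshotclasses.snapshot.storage.k8s.io")
```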
	I0908 13:36:25.309259 1121483 api_server.go:279] https://192.168.39.135:8443/healthz returned 200:
	ok
	I0908 13:36:25.331506 1121483 api_server.go:141] control plane version: v1.34.0
	I0908 13:36:25.331577 1121483 api_server.go:131] duration metric: took 49.168746ms to wait for apiserver health ...
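The healthz gate in the four lines above is nothing more than an HTTPS GET against the apiserver's `/healthz` endpoint, accepting a 200 with body "ok". A bare-bones reproduction follows; skipping TLS verification is purely for illustration, where a real client (including minikube's) would trust the cluster CA.

```go
// Sketch: probe the apiserver health endpoint seen in the log.
// InsecureSkipVerify is illustration only; trust the cluster CA in real code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.135:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```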
	I0908 13:36:25.331591 1121483 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:36:25.355691 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:36:25.373082 1121483 system_pods.go:59] 16 kube-system pods found
	I0908 13:36:25.373142 1121483 system_pods.go:61] "amd-gpu-device-plugin-kzxl5" [b3f56fe0-40a6-4ffe-b7de-7663f12a383a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 13:36:25.373155 1121483 system_pods.go:61] "coredns-66bc5c9577-f87jd" [5e29f680-846f-4be6-a681-48cda8d28f05] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:36:25.373164 1121483 system_pods.go:61] "coredns-66bc5c9577-jp2nv" [62f949b8-763b-4d38-bd0b-435148046042] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:36:25.373171 1121483 system_pods.go:61] "etcd-addons-674449" [0d29c164-6411-4b7d-af98-8bda6f7b6e1c] Running
	I0908 13:36:25.373187 1121483 system_pods.go:61] "kube-apiserver-addons-674449" [2cab10f6-ff9f-407e-88bd-d596734cb66a] Running
	I0908 13:36:25.373192 1121483 system_pods.go:61] "kube-controller-manager-addons-674449" [f8e03448-5e83-45b9-9e70-56f13512d618] Running
	I0908 13:36:25.373200 1121483 system_pods.go:61] "kube-ingress-dns-minikube" [ab8c49c1-9369-40b5-88b2-c345b39fc2a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:36:25.373205 1121483 system_pods.go:61] "kube-proxy-qr6fm" [e15a8522-5091-4bff-b63d-188d2ecb8629] Running
	I0908 13:36:25.373211 1121483 system_pods.go:61] "kube-scheduler-addons-674449" [d73cad60-6a07-4ae7-b10b-cd2fa3a80ac8] Running
	I0908 13:36:25.373220 1121483 system_pods.go:61] "metrics-server-85b7d694d7-vpjdn" [bfb3b498-93a0-4972-8c87-5ed48139b3d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:36:25.373229 1121483 system_pods.go:61] "nvidia-device-plugin-daemonset-n676m" [fd89881d-3311-4bbe-bd0e-8609f7c85713] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 13:36:25.373237 1121483 system_pods.go:61] "registry-66898fdd98-gc8hq" [a413903f-bf54-4cd9-a1c0-7a955a711b5d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:36:25.373257 1121483 system_pods.go:61] "registry-creds-764b6fb674-27jrs" [4fb49d50-47e7-47b7-81b3-648a2981c7e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:36:25.373271 1121483 system_pods.go:61] "registry-proxy-7ngm4" [d9cfc107-6d8d-4cc3-9a3b-165b7418c9a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:36:25.373276 1121483 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vbz48" [8cda6e0f-09d5-4a89-8845-66d4b70ef2d0] Pending
	I0908 13:36:25.373283 1121483 system_pods.go:61] "storage-provisioner" [03f506b1-95ba-4374-a259-a48aff54cbf3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:36:25.373291 1121483 system_pods.go:74] duration metric: took 41.692691ms to wait for pod list to return data ...
	I0908 13:36:25.373310 1121483 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:36:25.398968 1121483 default_sa.go:45] found service account: "default"
	I0908 13:36:25.398998 1121483 default_sa.go:55] duration metric: took 25.680457ms for default service account to be created ...
	I0908 13:36:25.399010 1121483 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:36:25.450961 1121483 system_pods.go:86] 17 kube-system pods found
	I0908 13:36:25.450999 1121483 system_pods.go:89] "amd-gpu-device-plugin-kzxl5" [b3f56fe0-40a6-4ffe-b7de-7663f12a383a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 13:36:25.451008 1121483 system_pods.go:89] "coredns-66bc5c9577-f87jd" [5e29f680-846f-4be6-a681-48cda8d28f05] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:36:25.451017 1121483 system_pods.go:89] "coredns-66bc5c9577-jp2nv" [62f949b8-763b-4d38-bd0b-435148046042] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:36:25.451023 1121483 system_pods.go:89] "etcd-addons-674449" [0d29c164-6411-4b7d-af98-8bda6f7b6e1c] Running
	I0908 13:36:25.451027 1121483 system_pods.go:89] "kube-apiserver-addons-674449" [2cab10f6-ff9f-407e-88bd-d596734cb66a] Running
	I0908 13:36:25.451031 1121483 system_pods.go:89] "kube-controller-manager-addons-674449" [f8e03448-5e83-45b9-9e70-56f13512d618] Running
	I0908 13:36:25.451037 1121483 system_pods.go:89] "kube-ingress-dns-minikube" [ab8c49c1-9369-40b5-88b2-c345b39fc2a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:36:25.451041 1121483 system_pods.go:89] "kube-proxy-qr6fm" [e15a8522-5091-4bff-b63d-188d2ecb8629] Running
	I0908 13:36:25.451044 1121483 system_pods.go:89] "kube-scheduler-addons-674449" [d73cad60-6a07-4ae7-b10b-cd2fa3a80ac8] Running
	I0908 13:36:25.451049 1121483 system_pods.go:89] "metrics-server-85b7d694d7-vpjdn" [bfb3b498-93a0-4972-8c87-5ed48139b3d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:36:25.451055 1121483 system_pods.go:89] "nvidia-device-plugin-daemonset-n676m" [fd89881d-3311-4bbe-bd0e-8609f7c85713] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 13:36:25.451060 1121483 system_pods.go:89] "registry-66898fdd98-gc8hq" [a413903f-bf54-4cd9-a1c0-7a955a711b5d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:36:25.451066 1121483 system_pods.go:89] "registry-creds-764b6fb674-27jrs" [4fb49d50-47e7-47b7-81b3-648a2981c7e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:36:25.451072 1121483 system_pods.go:89] "registry-proxy-7ngm4" [d9cfc107-6d8d-4cc3-9a3b-165b7418c9a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:36:25.451080 1121483 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vbz48" [8cda6e0f-09d5-4a89-8845-66d4b70ef2d0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:36:25.451085 1121483 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wnqqg" [798c0151-ba33-41b9-9b67-27fb93f187e1] Pending
	I0908 13:36:25.451091 1121483 system_pods.go:89] "storage-provisioner" [03f506b1-95ba-4374-a259-a48aff54cbf3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:36:25.451102 1121483 system_pods.go:126] duration metric: took 52.085087ms to wait for k8s-apps to be running ...
	I0908 13:36:25.451114 1121483 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 13:36:25.451165 1121483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
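The kubelet check relies on `systemctl is-active --quiet`, which prints nothing and signals purely through its exit status (0 means active). A local sketch of the same idea is below; minikube actually runs the command remotely through its ssh_runner, which this does not model.

```go
// Sketch: use the exit status of `systemctl is-active --quiet kubelet`
// as a boolean liveness signal, mirroring the check in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil) // non-nil err covers inactive or missing unit
}
```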
	I0908 13:36:25.523421 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:25.550262 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:25.576678 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:36:26.030817 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:26.030890 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:26.564813 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:26.565522 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:26.755554 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.389105347s)
	I0908 13:36:26.755603 1121483 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.943727133s)
	I0908 13:36:26.755624 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:26.755639 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:26.755961 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:26.756116 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:26.756137 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:26.756148 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:26.756160 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:26.756425 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:26.756483 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:26.756500 1121483 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-674449"
	I0908 13:36:26.757786 1121483 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:36:26.757848 1121483 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 13:36:26.759823 1121483 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 13:36:26.760536 1121483 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 13:36:26.761013 1121483 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 13:36:26.761038 1121483 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 13:36:26.812496 1121483 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 13:36:26.812523 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:27.035818 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:27.037165 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:27.129439 1121483 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 13:36:27.129473 1121483 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 13:36:27.263433 1121483 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 13:36:27.263462 1121483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 13:36:27.270419 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:27.378304 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 13:36:27.516220 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:27.517670 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:27.772568 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:28.019697 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:28.023121 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:28.269847 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:28.513649 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:28.518464 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:28.768505 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:29.018217 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:29.022607 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:29.145658 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.789912214s)
	W0908 13:36:29.145717 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:29.145742 1121483 retry.go:31] will retry after 490.866174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
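Unlike the snapshot CRD race above, this gadget failure cannot be fixed by retrying: kubectl's client-side validation rejects `/etc/kubernetes/addons/ig-crd.yaml` because the file's top-level `apiVersion` and `kind` are missing, so every retry below hits the same error. A manifest could be pre-flighted by decoding its TypeMeta; the sketch below is an assumed check, covering the first YAML document only (a real check would split on `---`):

```go
// Sketch: pre-flight a manifest for the apiVersion/kind fields that
// kubectl's validation complains about above. Single-document YAML only.
package main

import (
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	var tm metav1.TypeMeta
	if err := yaml.Unmarshal(data, &tm); err != nil {
		panic(err)
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		fmt.Println("invalid manifest: apiVersion/kind not set")
		return
	}
	fmt.Printf("manifest declares %s %s\n", tm.APIVersion, tm.Kind)
}
```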
	I0908 13:36:29.145677 1121483 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.694486104s)
	I0908 13:36:29.145765 1121483 system_svc.go:56] duration metric: took 3.694645995s WaitForService to wait for kubelet
	I0908 13:36:29.145778 1121483 kubeadm.go:578] duration metric: took 15.241058397s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:36:29.145807 1121483 node_conditions.go:102] verifying NodePressure condition ...
	I0908 13:36:29.145807 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.569074767s)
	I0908 13:36:29.145923 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:29.145941 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:29.146378 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:29.146442 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:29.146458 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:29.146468 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:29.146424 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:29.146759 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:29.146769 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:29.153501 1121483 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 13:36:29.153540 1121483 node_conditions.go:123] node cpu capacity is 2
	I0908 13:36:29.153555 1121483 node_conditions.go:105] duration metric: took 7.742931ms to run NodePressure ...
	I0908 13:36:29.153569 1121483 start.go:241] waiting for startup goroutines ...
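The NodePressure figures just above (ephemeral storage 17734596Ki, 2 CPUs) come straight off the Node object's capacity map. Reading them with client-go looks roughly like this sketch (node name and clientset assumed):

```go
// Sketch: read the node capacity fields that the NodePressure check logs.
package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func printCapacity(cs *kubernetes.Clientset, node string) error {
	n, err := cs.CoreV1().Nodes().Get(context.Background(), node, metav1.GetOptions{})
	if err != nil {
		return err
	}
	cpu := n.Status.Capacity[corev1.ResourceCPU]
	eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
	return nil
}
```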
	I0908 13:36:29.273682 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:29.525285 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.146929078s)
	I0908 13:36:29.525395 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:29.525413 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:29.525812 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:29.525851 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:29.525864 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:29.525872 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:36:29.525895 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:36:29.526217 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:36:29.526237 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:36:29.526259 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:36:29.527275 1121483 addons.go:479] Verifying addon gcp-auth=true in "addons-674449"
	I0908 13:36:29.529292 1121483 out.go:179] * Verifying gcp-auth addon...
	I0908 13:36:29.531004 1121483 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 13:36:29.535416 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:29.539977 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:29.621453 1121483 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 13:36:29.621476 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:29.637659 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:36:29.769360 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:30.015670 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:30.018712 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:30.039761 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:30.272767 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:30.518256 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:30.518462 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:30.538372 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:30.767921 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:31.038685 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:31.039069 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:31.045457 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:31.268318 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:31.378685 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.740964896s)
	W0908 13:36:31.378740 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:31.378770 1121483 retry.go:31] will retry after 389.223348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:31.513416 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:31.514241 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:31.538307 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:31.768326 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:36:31.771161 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:32.017603 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:32.020476 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:32.038503 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:32.270463 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:32.515792 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:32.519764 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:32.538142 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:32.767644 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:33.023490 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:33.025236 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:33.034165 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:33.143943 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.375563709s)
	W0908 13:36:33.144033 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:33.144064 1121483 retry.go:31] will retry after 504.95525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:33.267750 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:33.513868 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:33.517456 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:33.537046 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:33.650308 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:36:33.766261 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:34.015171 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:34.016963 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:34.037895 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:34.267109 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:34.518223 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:34.526000 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:34.544797 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:34.767528 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:35.006066 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.355702832s)
	W0908 13:36:35.006123 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:35.006150 1121483 retry.go:31] will retry after 746.370009ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:35.018405 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:35.018565 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:35.036078 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:35.264784 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:35.510392 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:35.515865 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:35.537347 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:35.753544 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:36:35.767722 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:36.014296 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:36.015478 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:36.038192 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:36.267735 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:36.515408 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:36.517080 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:36.537220 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:36.769825 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:37.019197 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:37.024546 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:37.035987 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:37.061218 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.307624726s)
	W0908 13:36:37.061282 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:37.061312 1121483 retry.go:31] will retry after 2.768950452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
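Taken together, the `retry.go:31` delays in this log (293ms, 490ms, 389ms, 504ms, 746ms, 2.77s, ...) trace a roughly exponential curve with jitter, the usual way to retry an apply loop without hammering the server in lockstep. A generic sketch with apimachinery's backoff helper follows; the parameters are invented, not minikube's actual retry.go configuration.

```go
// Sketch: jittered exponential backoff around a flaky apply, matching the
// shape of the "will retry after ..." delays above. Parameters are made up.
package example

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func applyWithBackoff(apply func() error) error {
	backoff := wait.Backoff{
		Duration: 300 * time.Millisecond, // base delay before jitter
		Factor:   1.8,                    // growth per attempt
		Jitter:   0.5,                    // randomize to avoid lockstep retries
		Steps:    8,                      // give up after this many attempts
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := apply(); err != nil {
			return false, nil // retry; return the error instead to abort early
		}
		return true, nil
	})
}
```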
	I0908 13:36:37.270807 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:37.517720 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:37.517881 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:37.540690 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:37.768704 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:38.012998 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:38.014432 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:38.037539 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:38.264681 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:38.512167 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:38.515643 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:38.535938 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:38.765532 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:39.020069 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:39.020469 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:39.036639 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:39.268020 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:39.510714 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:39.513305 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:39.535094 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:39.765568 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:39.831505 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:36:40.018404 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:40.019037 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:40.045391 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:40.269145 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:40.517847 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:40.518053 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:40.537492 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:40.765863 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:40.853679 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.022125936s)
	W0908 13:36:40.853793 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:40.853846 1121483 retry.go:31] will retry after 1.650720069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:41.013283 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:41.017551 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:41.034010 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:41.267780 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:41.511004 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:41.513672 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:41.535544 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:41.767744 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:42.011208 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:42.016729 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:42.035279 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:42.493963 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:42.504995 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:36:42.514966 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:42.515275 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:42.537383 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:42.766196 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:43.009874 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:43.015943 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:43.040890 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:43.266776 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:43.509549 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:43.512699 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:43.536789 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:43.635727 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.130682628s)
	W0908 13:36:43.635783 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:43.635808 1121483 retry.go:31] will retry after 4.919282951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
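
The apply failure above has a single root cause: kubectl rejects the batch because at least one YAML document inside /etc/kubernetes/addons/ig-crd.yaml declares neither apiVersion nor kind, the two fields every Kubernetes object must carry. The other manifests in the batch validate fine, which is why stdout still reports the gadget namespace, RBAC objects, and DaemonSet as unchanged or configured while the command exits 1. As a minimal sketch (not minikube code; the file path comes from the log, and the split-on-"---" heuristic plus the gopkg.in/yaml.v3 dependency are assumptions), one could scan the manifest for the offending document like this:

    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Path taken from the failing kubectl command in the log above.
        data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Splitting on "\n---" is a rough heuristic for YAML document separators.
        for i, doc := range strings.Split(string(data), "\n---") {
            var obj map[string]interface{}
            if err := yaml.Unmarshal([]byte(doc), &obj); err != nil || len(obj) == 0 {
                continue // skip empty or unparseable documents
            }
            if obj["apiVersion"] == nil || obj["kind"] == nil {
                // This is exactly the condition kubectl's validation rejects.
                fmt.Printf("document %d: apiVersion and/or kind missing\n", i+1)
            }
        }
    }
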
	I0908 13:36:43.764960 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:44.014592 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:44.014602 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:44.034549 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:44.268372 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:44.509164 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:44.631070 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:44.634274 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:44.765968 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:45.014942 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:45.014942 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:45.037216 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:45.266186 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:45.511142 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:45.513518 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:45.535976 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:45.765241 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:46.011850 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:46.013128 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:46.111509 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:46.265306 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:46.510749 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:46.512823 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:46.535760 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:46.765412 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:47.018688 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:47.019120 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:47.037090 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:47.266278 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:47.514127 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:47.517645 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:47.535941 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:47.766717 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:48.015199 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:48.016178 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:48.036937 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:48.266577 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:48.510541 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:48.513938 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:48.536490 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:48.555576 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:36:48.766816 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:49.011984 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:49.028405 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:49.040086 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:49.503756 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:49.512682 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:49.517263 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:49.535246 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:49.767697 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:49.862723 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.307076728s)
	W0908 13:36:49.862779 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:49.862808 1121483 retry.go:31] will retry after 8.22033622s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
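
Note how each failed apply is handed back to a retry helper (the retry.go:31 lines) that reschedules it after a growing, jittered delay: 4.919s here, then 8.220s, 10.659s, 17.675s, and 14.392s across the attempts in this log. The following is a rough illustration of that retry-with-backoff pattern as it appears in these lines, not minikube's actual retry.go; the growth factor and jitter range are assumptions chosen to resemble the observed delays:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs op with a growing, jittered delay between
    // attempts, mirroring the "will retry after ..." lines above.
    func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Up to 50% random jitter so concurrent retries don't align.
            jittered := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            delay = delay * 3 / 2 // grow the base delay between attempts
        }
        return err
    }

    func main() {
        calls := 0
        _ = retryWithBackoff(3, 5*time.Second, func() error {
            calls++
            return fmt.Errorf("apply failed (attempt %d)", calls)
        })
    }
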
	I0908 13:36:50.016421 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:50.016946 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:50.037069 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:50.264490 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:50.513929 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:50.514286 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:50.535202 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:50.770735 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:51.011325 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:51.012573 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:51.036022 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:51.266303 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:51.510741 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:51.511360 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:51.534427 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:51.768325 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:52.015943 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:52.016033 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:52.036974 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:52.267255 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:52.515555 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:52.515786 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:52.537539 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:52.766573 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:53.019567 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:53.019567 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:53.034668 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:53.270556 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:53.524132 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:53.526432 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:53.535459 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:53.780069 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:54.018488 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:54.024613 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:54.039017 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:54.266471 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:54.521813 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:54.537124 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:54.541709 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:54.785760 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:55.026100 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:55.028350 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:55.037601 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:55.269207 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:55.521834 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:55.525492 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:55.539250 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:55.859039 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:56.015144 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:56.015236 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:56.036183 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:56.269717 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:56.520750 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:56.528136 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:56.541055 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:56.769346 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:57.020493 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:57.020887 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:57.039853 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:57.267314 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:57.511007 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:57.514207 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:57.538954 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:57.769783 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:58.014277 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:58.015263 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:58.035972 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:58.084256 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:36:58.268314 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:58.514645 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:58.515772 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:58.536051 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:59.092381 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:59.092414 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:59.092428 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:59.093596 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:59.267375 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:36:59.513179 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:36:59.514727 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:36:59.534172 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:36:59.690881 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.606560651s)
	W0908 13:36:59.690943 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:36:59.690984 1121483 retry.go:31] will retry after 10.659359638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
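
The bulk of this log is a separate, concurrent concern: the kapi.go:96 lines show minikube polling four label selectors (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver) roughly twice a second until their pods leave Pending. A hedged client-go sketch of that kind of wait loop follows; the selector, kubeconfig path, and 500ms interval are read off the log, while the function itself is illustrative rather than minikube's implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSelector polls the pods behind a label selector until none of
    // them is still in the Pending phase, like the wait loop logged above.
    func waitForSelector(ctx context.Context, c kubernetes.Interface, selector string) error {
        for {
            pods, err := c.CoreV1().Pods(metav1.NamespaceAll).List(ctx,
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            pending := 0
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodPending {
                    pending++
                }
            }
            if len(pods.Items) > 0 && pending == 0 {
                return nil // every matching pod has left Pending
            }
            fmt.Printf("waiting for pod %q, %d still Pending\n", selector, pending)
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond): // interval seen in the log
            }
        }
    }

    func main() {
        // Kubeconfig path is an assumption for this sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        // One of the four selectors this log is polling.
        if err := waitForSelector(ctx, client, "kubernetes.io/minikube-addons=registry"); err != nil {
            panic(err)
        }
    }
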
	I0908 13:36:59.767741 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:00.017137 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:00.020525 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:00.036550 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:00.265553 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:00.514834 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:00.516213 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:00.536420 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:00.769011 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:01.014009 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:01.014682 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:01.048077 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:01.312377 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:01.514378 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:01.517382 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:01.535595 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:01.767715 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:02.018920 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:02.019181 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:02.036874 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:02.268728 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:02.510766 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:02.515541 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:02.537223 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:02.765914 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:03.011601 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:03.012713 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:03.036632 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:03.267969 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:03.516869 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:03.517218 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:03.538836 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:03.764499 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:04.021827 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:04.021848 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:04.036797 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:04.264915 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:04.511970 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:04.515333 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:04.537519 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:04.766709 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:05.011070 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:05.014562 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:05.036502 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:05.264977 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:05.517752 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:05.518605 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:05.535897 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:05.764877 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:06.015713 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:06.018491 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:06.036003 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:06.270274 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:06.517610 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:06.519238 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:06.534993 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:06.776875 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:07.138075 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:07.143374 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:07.143984 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:07.265452 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:07.511845 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:07.514247 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:07.536033 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:07.770247 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:08.013232 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:08.015405 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:08.036102 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:08.265697 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:08.511424 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:08.512420 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:08.535399 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:08.772509 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:09.018060 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:09.021960 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:09.035474 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:09.265455 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:09.514642 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:09.516550 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:09.540691 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:09.768975 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:10.011227 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:10.011442 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:10.035816 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:10.269291 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:10.351525 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:37:10.514000 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:10.515144 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:10.535406 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:10.764397 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:11.017857 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:11.018032 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:11.036140 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:11.269086 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:11.512514 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:11.513427 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:11.535099 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:11.568449 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.216865859s)
	W0908 13:37:11.568509 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:37:11.568544 1121483 retry.go:31] will retry after 17.675547462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:37:11.765731 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:12.015871 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:12.016034 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:12.037111 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:12.265717 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:12.510726 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:12.511379 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:12.535639 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:12.764174 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:13.012753 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:13.012895 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:13.034963 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:13.266474 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:13.511678 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:13.512936 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:13.535068 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:13.765196 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:14.010929 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:14.013634 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:14.035196 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:14.265336 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:14.510075 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:14.511615 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:14.535087 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:14.765677 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:15.010472 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:15.014104 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:15.034306 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:15.265592 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:15.510605 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:15.511305 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:15.534306 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:15.766109 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:16.011831 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:16.016485 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:16.035228 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:16.266114 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:16.511234 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:16.511680 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:16.534961 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:16.771388 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:17.051334 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:17.052043 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:17.052271 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:17.265711 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:17.511201 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:17.512671 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:17.534989 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:17.764888 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:18.011466 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:18.013341 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:18.035403 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:18.266380 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:18.509713 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:18.512851 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:18.535076 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:18.766610 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:19.011000 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:19.013073 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:19.035197 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:19.264527 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:19.511705 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:19.511804 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:19.535366 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:19.767839 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:20.014799 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:20.015079 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:20.035147 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:20.266145 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:20.510438 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:20.512509 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:20.534341 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:20.766481 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:21.011812 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:21.013069 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:21.034218 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:21.266080 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:21.511854 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:21.512829 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:21.536381 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:21.765573 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:22.014034 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:22.015924 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:22.035740 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:22.264222 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:22.510219 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:22.512509 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:22.535197 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:22.765437 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:23.012113 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:23.015728 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:23.035322 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:23.266319 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:23.511291 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:23.513654 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:23.535754 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:23.765545 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:24.010959 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:24.011642 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:24.037357 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:24.265887 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:24.510242 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:24.512390 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:24.536470 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:24.766025 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:25.013111 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:25.013472 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:25.035148 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:25.265522 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:25.510332 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:25.512112 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:25.535581 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:25.765035 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:26.012148 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:26.012511 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:26.035673 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:26.265211 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:26.510047 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:26.511334 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:26.535682 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:26.764468 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:27.014667 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:27.015198 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:27.034682 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:27.265398 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:27.510467 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:27.510912 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:27.534635 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:27.765705 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:28.011353 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:28.011518 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:28.035262 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:28.266328 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:28.510638 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:28.511270 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:28.535571 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:28.764159 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:29.011274 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:29.011620 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:29.035345 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:29.245277 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:37:29.266389 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:29.511324 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:29.514197 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:29.537082 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:29.769371 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:30.015847 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:30.021380 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:37:30.033853 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:37:30.033908 1121483 retry.go:31] will retry after 14.391938226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
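The retry.go:31 lines above show the addon manager backing off and re-running the failed kubectl apply with a longer wait each time. As a rough illustration of that retry-with-growing-delay pattern (a sketch only; applyWithRetry, the 10s base delay, and the jitter are assumptions, not minikube's actual retry.go), a Go loop like this produces the same shape of output:

	// Sketch of a retry loop with a growing, jittered delay; names and
	// constants here are illustrative, not taken from minikube.
	package retry

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs "kubectl <args...>" until it succeeds or the
	// attempt budget is spent, sleeping a little longer after each failure.
	func applyWithRetry(args []string, attempts int) error {
		var err error
		delay := 10 * time.Second
		for i := 0; i < attempts; i++ {
			if err = exec.Command("kubectl", args...).Run(); err == nil {
				return nil
			}
			// Randomize the wait so concurrent callers do not retry in lockstep.
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay += delay / 2 // grow the base delay for the next attempt
		}
		return err
	}

The 14.4s and then 20.5s waits logged here are consistent with this kind of jittered, growing backoff.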
	I0908 13:37:30.035962 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical kapi.go:96 poll lines, every ~250ms from 13:37:30 to 13:37:44, for gcp-auth, csi-hostpath-driver, ingress-nginx, and registry, all still Pending, elided ...]
	I0908 13:37:44.264369 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:44.426454 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:37:44.514422 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:44.516842 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:44.538439 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:37:44.766606 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:37:45.013783 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:37:45.013903 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:37:45.034827 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 13:37:45.183841 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:37:45.183900 1121483 retry.go:31] will retry after 20.524407059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:37:45.265218 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... identical kapi.go:96 poll lines, every ~250ms from 13:37:45 to 13:38:01, for the same four selectors, all still Pending, elided ...]
	I0908 13:38:01.267478 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:01.512843 1121483 kapi.go:107] duration metric: took 1m36.505105753s to wait for kubernetes.io/minikube-addons=registry ...
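The kapi.go:96 poll lines and the kapi.go:107 duration metric above come from repeatedly listing pods by label selector until they leave Pending. A minimal client-go sketch of that pattern, assuming a standard kubernetes.Interface clientset (waitForPods and the 250ms cadence are illustrative, not minikube's actual helper):

	// Sketch of a label-selector wait loop: list matching pods and poll
	// until every one reports Running or the deadline passes.
	package wait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false // still Pending, like the log lines above
						break
					}
				}
				if ready {
					return nil
				}
			}
			time.Sleep(250 * time.Millisecond) // matches the ~250ms poll cadence in this log
		}
		return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
	}

Watching via the API (a watch or informer) would avoid the busy polling, at the cost of more setup.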
	I0908 13:38:01.513932 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:01.537386 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:01.774537 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:02.011212 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:02.034021 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:02.271858 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:02.514162 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:02.534803 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:02.775116 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:03.014123 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:03.036795 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:03.268814 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:03.511944 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:03.535272 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:03.765392 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:04.015772 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:04.036947 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:04.269416 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:04.517971 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:04.539163 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:04.765118 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:05.016234 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:05.037434 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:05.268600 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:05.511427 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:05.534790 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:05.709135 1121483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:38:05.767902 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:06.012114 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:06.036027 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:06.267329 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:06.514696 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:06.544516 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:07.079451 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:07.079522 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:07.080509 1121483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.371325454s)
	W0908 13:38:07.080559 1121483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:38:07.080641 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:38:07.080657 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:38:07.080976 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:38:07.081069 1121483 main.go:141] libmachine: (addons-674449) DBG | Closing plugin on server side
	I0908 13:38:07.081098 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 13:38:07.081095 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:38:07.081110 1121483 main.go:141] libmachine: Making call to close driver server
	I0908 13:38:07.081163 1121483 main.go:141] libmachine: (addons-674449) Calling .Close
	I0908 13:38:07.081456 1121483 main.go:141] libmachine: Successfully made call to close driver server
	I0908 13:38:07.081498 1121483 main.go:141] libmachine: Making call to close connection to plugin binary
	W0908 13:38:07.081648 1121483 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
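The failure that finally surfaces here is kubectl's client-side validation: per the stderr, at least one YAML document in /etc/kubernetes/addons/ig-crd.yaml has no apiVersion or kind set, so every retry fails identically even though the resources listed in stdout apply as "unchanged". A quick pre-flight check for such a document, sketched with gopkg.in/yaml.v3 (hypothetical tooling, not part of minikube):

	// Decode each YAML document in a manifest and flag any that lack the
	// apiVersion or kind fields kubectl's validation requires.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open(os.Args[1])
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 1; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				panic(err)
			}
			// kubectl rejects the whole apply when either field is missing.
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				fmt.Printf("document %d: apiVersion/kind not set\n", i)
			}
		}
	}

The --validate=false escape hatch kubectl suggests would only mask the problem; the manifest still needs apiVersion and kind before the API server can route the object.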
	I0908 13:38:07.266825 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... identical kapi.go:96 poll lines, every ~250ms from 13:38:07 to 13:38:18, for csi-hostpath-driver, ingress-nginx, and gcp-auth, all still Pending, elided ...]
	I0908 13:38:18.035281 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:18.266636 1121483 kapi.go:107] duration metric: took 1m51.50608942s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 13:38:18.510366 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:18.534318 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:19.014401 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:19.038414 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:19.511173 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:19.534639 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:20.011029 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:20.035676 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:20.512339 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:20.535414 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:21.011132 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:21.035444 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:21.511398 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:21.535297 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:22.011762 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:22.035445 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:22.511525 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:22.535127 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:23.010533 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:23.034853 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:23.511007 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:23.536643 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:24.011848 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:24.034452 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:24.512481 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:24.535460 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:25.010559 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:25.034920 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:25.511411 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:25.535310 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:26.012193 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:26.036554 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:26.511860 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:26.534786 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:27.011042 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:27.035865 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:27.510739 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:27.537092 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:28.010725 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:28.035883 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:28.511057 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:28.535577 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:29.011793 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:29.034436 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:29.510851 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:29.535050 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:30.010543 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:30.035849 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:30.511579 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:30.535119 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:31.010116 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:31.036156 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:31.510307 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:31.534887 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:32.010539 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:32.035523 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:32.511860 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:32.534899 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:33.010012 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:33.035473 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:33.511338 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:33.534991 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:34.010634 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:34.034310 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:34.511826 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:34.535023 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:35.011137 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:35.034448 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:35.510938 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:35.535308 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:36.011068 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:36.035473 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:36.512091 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:36.535953 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:37.012172 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:37.037049 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:37.509875 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:37.535272 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:38.012406 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:38.036031 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:38.511517 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:38.534587 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:39.014092 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:39.036451 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:39.512213 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:39.535435 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:40.013375 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:40.034312 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:40.512872 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:40.537913 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:41.012264 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:41.034682 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:41.511081 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:41.540783 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:42.188437 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:42.189562 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:42.515819 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:42.535467 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:43.029863 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:43.038895 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:43.569492 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:43.569675 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:44.015536 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:44.036259 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:44.512410 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:44.537211 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:45.021844 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:45.040762 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:45.512132 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:45.536365 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:46.013305 1121483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:38:46.035719 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:46.520339 1121483 kapi.go:107] duration metric: took 2m21.514207563s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 13:38:46.538738 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:47.038520 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:47.535925 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:48.035210 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:48.536961 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:49.037064 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:49.534838 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:50.035195 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:50.542026 1121483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:38:51.036796 1121483 kapi.go:107] duration metric: took 2m21.505789502s to wait for kubernetes.io/minikube-addons=gcp-auth ...
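The kapi.go:96 lines above are minikube polling the API server for pods matching a label selector until they leave Pending; the kapi.go:107 lines record the total wait. A minimal client-go sketch of that pattern follows — illustrative only, not minikube's actual implementation; the half-second interval and five-minute cap are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPods polls until every pod matching selector in ns is Running,
// the same shape of loop the kapi.go:96 lines above reflect.
func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		pending := 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				pending++
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if len(pods.Items) > 0 && pending == 0 {
			// Mirrors the kapi.go:107 duration line above.
			fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabeledPods(context.Background(), cs, "ingress-nginx",
		"app.kubernetes.io/name=ingress-nginx", 5*time.Minute); err != nil {
		panic(err)
	}
}
```

Pointed at this cluster's kubeconfig, such a loop would emit the same kind of "waiting for pod" lines until the ingress-nginx controller reaches Running.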
	I0908 13:38:51.038434 1121483 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-674449 cluster.
	I0908 13:38:51.039737 1121483 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 13:38:51.041058 1121483 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0908 13:38:51.042457 1121483 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, registry-creds, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0908 13:38:51.043921 1121483 addons.go:514] duration metric: took 2m37.139109829s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin registry-creds amd-gpu-device-plugin storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
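The gcp-auth messages above say that adding a label with the `gcp-auth-skip-secret` key opts a pod out of the credential mount. As a hedged sketch — the key comes from the message itself, while the "true" value, pod name, and image are illustrative assumptions — such a pod could be built with client-go types and printed as YAML for `kubectl apply -f -`:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // hypothetical name
			Namespace: "default",
			Labels: map[string]string{
				// Key taken from the gcp-auth message above; the value is an
				// assumption -- the message only says the key must be present.
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "docker.io/library/nginx:alpine", // placeholder image
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // ready to pipe into: kubectl apply -f -
}
```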
	I0908 13:38:51.044025 1121483 start.go:246] waiting for cluster config update ...
	I0908 13:38:51.044061 1121483 start.go:255] writing updated cluster config ...
	I0908 13:38:51.044383 1121483 ssh_runner.go:195] Run: rm -f paused
	I0908 13:38:51.051508 1121483 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:38:51.055428 1121483 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jp2nv" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:51.061789 1121483 pod_ready.go:94] pod "coredns-66bc5c9577-jp2nv" is "Ready"
	I0908 13:38:51.061821 1121483 pod_ready.go:86] duration metric: took 6.367423ms for pod "coredns-66bc5c9577-jp2nv" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:51.064714 1121483 pod_ready.go:83] waiting for pod "etcd-addons-674449" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:51.070265 1121483 pod_ready.go:94] pod "etcd-addons-674449" is "Ready"
	I0908 13:38:51.070309 1121483 pod_ready.go:86] duration metric: took 5.570994ms for pod "etcd-addons-674449" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:51.073086 1121483 pod_ready.go:83] waiting for pod "kube-apiserver-addons-674449" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:51.080457 1121483 pod_ready.go:94] pod "kube-apiserver-addons-674449" is "Ready"
	I0908 13:38:51.080493 1121483 pod_ready.go:86] duration metric: took 7.37616ms for pod "kube-apiserver-addons-674449" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:51.082828 1121483 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-674449" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:51.456875 1121483 pod_ready.go:94] pod "kube-controller-manager-addons-674449" is "Ready"
	I0908 13:38:51.456906 1121483 pod_ready.go:86] duration metric: took 374.041966ms for pod "kube-controller-manager-addons-674449" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:51.658119 1121483 pod_ready.go:83] waiting for pod "kube-proxy-qr6fm" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:52.056756 1121483 pod_ready.go:94] pod "kube-proxy-qr6fm" is "Ready"
	I0908 13:38:52.056789 1121483 pod_ready.go:86] duration metric: took 398.633311ms for pod "kube-proxy-qr6fm" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:52.258528 1121483 pod_ready.go:83] waiting for pod "kube-scheduler-addons-674449" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:52.656639 1121483 pod_ready.go:94] pod "kube-scheduler-addons-674449" is "Ready"
	I0908 13:38:52.656676 1121483 pod_ready.go:86] duration metric: took 398.113863ms for pod "kube-scheduler-addons-674449" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:38:52.656688 1121483 pod_ready.go:40] duration metric: took 1.605136243s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
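The pod_ready.go lines above gate on each pod's Ready condition rather than its phase. A minimal sketch of that predicate with client-go types — not minikube's actual pod_ready.go helper — looks like this:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True, the
// predicate behind the `pod "..." is "Ready"` lines above.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
	}}
	fmt.Println(isPodReady(p)) // true
}
```

Checking the condition rather than `Status.Phase` matters because a pod can be Running while its readiness probes still fail.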
	I0908 13:38:52.705473 1121483 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 13:38:52.707409 1121483 out.go:179] * Done! kubectl is now configured to use "addons-674449" cluster and "default" namespace by default
	
	
	==> CRI-O <==
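The entries below are CRI-O's debug log of gRPC calls against its RuntimeService and ImageService: Version, ImageFsInfo, and ListContainers with an empty filter. For orientation, a small Go client issuing the same RPCs might look like the following sketch — the socket path is CRI-O's conventional default, root access is typically required, and the module versions are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's conventional socket path (an assumption for this sketch).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Mirrors the Version request/response pairs in the log below.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Mirrors ImageFsInfo: image-store mountpoint and usage.
	img := runtimeapi.NewImageServiceClient(conn)
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Println(f.FsId.Mountpoint, f.UsedBytes.Value)
	}

	// Mirrors ListContainers: an empty filter returns the full container list,
	// hence the "No filters were applied" debug lines below.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}
```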
	Sep 08 13:42:11 addons-674449 crio[828]: time="2025-09-08 13:42:11.977921479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d72582b2-de6e-4a85-a0f5-cdcfab0f5219 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:11 addons-674449 crio[828]: time="2025-09-08 13:42:11.978459040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b15e64cf080128acad87e6e3f638c936387259da6213dccc02d5398677d94135,PodSandboxId:b2b244ac2fdcaf40d932d1fbe46ad8304880e1e32d17a40724309fad1a57f346,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_CREATED,CreatedAt:1757338931876033033,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-xtmkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b0889bf2-d0c6-44fd-b019-b939fc33dc0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697ed374e1bbcd9508abdac530e549d3e892b1901488ee2778877d0dbb1f4bac,PodSandboxId:2dfb98b48df73417cb7f2e22d09c2603725d7f79e72178f2b1d5a7217ec9063a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757338787230256380,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90e9767-4367-4541-88a3-b800f3b971db,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6bc7e5c021905fd911c1f158e9312ddbb1b89c5abfc8137585d79f9dfb290c0,PodSandboxId:c49fbe23eccc1b62393bab145ea2c84cee52d7bf3c973e493194d2ed9d541e75,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3bd49f1f42c46a0def0e208037f718f7a122902ffa846fd7c2757e017c1ee29e,State:CONTAINER_RUNNING,CreatedAt:1757338778878669135,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-85f8f8dc54-kbn9j,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid
: 8dfed74d-34c1-4f83-90f0-a4ffb9d80a5b,},Annotations:map[string]string{io.kubernetes.container.hash: 22a1aabb,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1004edf588e41248d483d2cac0fd02dffb8e43b82f8ce3fdebe80f03808fbb38,PodSandboxId:921508025c05741eaa11ca5c8a4be6b635f6c1935725e1d178807c30143e2cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757338736154254979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f965c06e-67a7-4092-9b85-b30957e5cec1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b72d4d164370bd6629513e46dee6ec7a9df4e7f557f4e125cdcb77a1cc90f7,PodSandboxId:98576a25ad52fe9d5d37533cb76517f7efca247a477e7d54e811de4d6b2c0ac1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757338726286575663,Labels:map[string]string{io.kubernetes.container.name: controller,io.kuber
netes.pod.name: ingress-nginx-controller-9cc49f96f-8qxx6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3e4024c-170b-496f-a454-a099404f18b5,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:f526ebd282920a84a7e42550aa834424d5e648ec9f63e9c9734bb15ab0e5d7f6,PodSandboxId:7816bfab9d63b2ae2c66a03199043de765e7e58df3135e8059ad5fe7b11f510a,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217
da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757338697151331109,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tfcvr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 19fd6217-de22-4f1d-9a9a-d307484acc19,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b6dd3790a7481420d56157ab73430d6765705b568dbe78ae1c0be03dffad1f,PodSandboxId:ff70bf87c082056b3898b5216b61a172496a4879878846fe3a97f395962e1fa4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-ngi
nx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757338683853319201,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-r7kz8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6985e98c-9486-46bc-817f-51244ea6dcea,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db4a5aba1bccd47c10a1120cb72c12e134f9df58b4130d539d87beb6fa2370b,PodSandboxId:72f14cc46e23ec6583af04f4eea8f78ef7de65f9c2bd8561937d142685dc6229,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&Image
Spec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757338676421514389,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-84fdd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 310f336f-3bf3-4b6d-898f-9ca64c3c855b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c3ab7504df0bd6c39089f4677beed01f534414a632dcdab9bedbace8572817,PodSandboxId:e8ccb056ce5abc93eb61c2e7e8496416a
31a8c8a96b9905d2887c6a26f31af2c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1757338631964122090,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-56j4z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 72454afa-6441-4888-b636-fa1e4598bdcf,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af088e13c26ac9ac7b1c86ce2e5c29fc79735c0540
1346447d5a22fe93704aea,PodSandboxId:50389f222239ee0c71c98b52f117006bc1555199bb141f1925bcc91654069d2b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757338623108492743,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c49c1-9369-40b5-88b2-c345b39fc2a1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b16fefaa769899c4fb445716899196e05b4f815dfa4a2679e54aa8d4984b81,PodSandboxId:c181eb9b86ea37048963318eb56562ca66ccd70fb4545d9a8ce479efc69e4e68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757338588394899408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f506b1-95ba-4374-a259-a48aff54cbf3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8acff52bc93907f6c3a90560e466986e532548b4f683a0335da08384853fc5,PodSandboxId:d9e7ae24d99a996359fc1e863a6c0d9a54c152ab6dfed41c83c0bc5108be396c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757338583916113148,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-kzxl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f56fe0-40a6-4ffe-b7de-7663f12a383a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e1057220717ed9cdcdad38ab351e4e2cbafd1dda0b42ca854b04be1f9ec2cb,PodSandboxId:d11dfc1f6b48b221fa32b4843beea039899146aa46d08d6c3c3dd20e0c860645,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757338575725317595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jp2nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f949b8-763b-4d38-bd0b-435148046042,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"
protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f12b9c2882a227d5713df9ab6340cd0fc18bd1cc6cb3d75c2b710ad07fe1b1ed,PodSandboxId:ec9111b8bb7249aa752a9e8dda23d1811e7a7bc9f1a49876b155246f0529c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757338574188814617,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubern
etes.pod.name: kube-proxy-qr6fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e15a8522-5091-4bff-b63d-188d2ecb8629,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d74d4f6083f161a043a27bb70ecdb49ea0b93c99cecf51776a44876f56736b4,PodSandboxId:53620840ada8142996f5e1637747df7a38b5e04c6aa6bf0728cac1430f4523ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757338562204339881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedu
ler-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b472955dcdc524b38e2239d290b22d25,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2cb7fe6974af584b1f0aa27c54c03f298f4b567258fa89ca65b5e2448d5144,PodSandboxId:758e5e705749d41e6b81a0e542bb2845016334f116014979644ffbc40e8994ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757338562
190574280,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 543412c087226072ca67a129000c9f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f9a21b035bc67d1c99924226b5e5b223df382ff4db2d3a808aa3c740551a11c,PodSandboxId:2e258ed4a4e52b978fde50e0fd231af3dafa7c70a96e4da0e2cbbbcce310f6ec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757338562137443196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e22c20da7865995a36eb339e1e8004a,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584e534d9f2fb0783086698690ed95dcac9ba84c5b393b9779d0d7204087bc1b,PodSandboxId:2a3fb6ae03d4987f136eab258302367d8249d037a03d416c5d49076645fc0bf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d1
6e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757338562118795572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 391d65c4e76aa9fc172cf2177cf0d7ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d72582b2-de6e-4a85-a0f5-cdcfab0f5219 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:11 addons-674449 crio[828]: time="2025-09-08 13:42:11.992376986Z" level=debug msg="Response: &StartContainerResponse{}" file="otel-collector/interceptors.go:74" id=d98eda79-fefe-4fe2-b6f4-e65e3ceae60c name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.028772405Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa974963-4d85-46ef-8588-0d6d6042be8b name=/runtime.v1.RuntimeService/Version
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.028861188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa974963-4d85-46ef-8588-0d6d6042be8b name=/runtime.v1.RuntimeService/Version
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.029869987Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f278cec0-1a16-4aa7-8643-ccb8b02b6afb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.031392641Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757338932031366367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605485,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f278cec0-1a16-4aa7-8643-ccb8b02b6afb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.031993594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecab2171-9acd-45bc-b631-ec0337c6f07e name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.032069097Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecab2171-9acd-45bc-b631-ec0337c6f07e name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.032537300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b15e64cf080128acad87e6e3f638c936387259da6213dccc02d5398677d94135,PodSandboxId:b2b244ac2fdcaf40d932d1fbe46ad8304880e1e32d17a40724309fad1a57f346,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757338931876033033,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-xtmkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b0889bf2-d0c6-44fd-b019-b939fc33dc0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697ed374e1bbcd9508abdac530e549d3e892b1901488ee2778877d0dbb1f4bac,PodSandboxId:2dfb98b48df73417cb7f2e22d09c2603725d7f79e72178f2b1d5a7217ec9063a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757338787230256380,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90e9767-4367-4541-88a3-b800f3b971db,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6bc7e5c021905fd911c1f158e9312ddbb1b89c5abfc8137585d79f9dfb290c0,PodSandboxId:c49fbe23eccc1b62393bab145ea2c84cee52d7bf3c973e493194d2ed9d541e75,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3bd49f1f42c46a0def0e208037f718f7a122902ffa846fd7c2757e017c1ee29e,State:CONTAINER_RUNNING,CreatedAt:1757338778878669135,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-85f8f8dc54-kbn9j,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid
: 8dfed74d-34c1-4f83-90f0-a4ffb9d80a5b,},Annotations:map[string]string{io.kubernetes.container.hash: 22a1aabb,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1004edf588e41248d483d2cac0fd02dffb8e43b82f8ce3fdebe80f03808fbb38,PodSandboxId:921508025c05741eaa11ca5c8a4be6b635f6c1935725e1d178807c30143e2cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757338736154254979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f965c06e-67a7-4092-9b85-b30957e5cec1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b72d4d164370bd6629513e46dee6ec7a9df4e7f557f4e125cdcb77a1cc90f7,PodSandboxId:98576a25ad52fe9d5d37533cb76517f7efca247a477e7d54e811de4d6b2c0ac1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757338726286575663,Labels:map[string]string{io.kubernetes.container.name: controller,io.kuber
netes.pod.name: ingress-nginx-controller-9cc49f96f-8qxx6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3e4024c-170b-496f-a454-a099404f18b5,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:f526ebd282920a84a7e42550aa834424d5e648ec9f63e9c9734bb15ab0e5d7f6,PodSandboxId:7816bfab9d63b2ae2c66a03199043de765e7e58df3135e8059ad5fe7b11f510a,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217
da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757338697151331109,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tfcvr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 19fd6217-de22-4f1d-9a9a-d307484acc19,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b6dd3790a7481420d56157ab73430d6765705b568dbe78ae1c0be03dffad1f,PodSandboxId:ff70bf87c082056b3898b5216b61a172496a4879878846fe3a97f395962e1fa4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-ngi
nx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757338683853319201,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-r7kz8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6985e98c-9486-46bc-817f-51244ea6dcea,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db4a5aba1bccd47c10a1120cb72c12e134f9df58b4130d539d87beb6fa2370b,PodSandboxId:72f14cc46e23ec6583af04f4eea8f78ef7de65f9c2bd8561937d142685dc6229,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&Image
Spec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757338676421514389,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-84fdd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 310f336f-3bf3-4b6d-898f-9ca64c3c855b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c3ab7504df0bd6c39089f4677beed01f534414a632dcdab9bedbace8572817,PodSandboxId:e8ccb056ce5abc93eb61c2e7e8496416a
31a8c8a96b9905d2887c6a26f31af2c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1757338631964122090,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-56j4z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 72454afa-6441-4888-b636-fa1e4598bdcf,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af088e13c26ac9ac7b1c86ce2e5c29fc79735c0540
1346447d5a22fe93704aea,PodSandboxId:50389f222239ee0c71c98b52f117006bc1555199bb141f1925bcc91654069d2b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757338623108492743,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c49c1-9369-40b5-88b2-c345b39fc2a1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b16fefaa769899c4fb445716899196e05b4f815dfa4a2679e54aa8d4984b81,PodSandboxId:c181eb9b86ea37048963318eb56562ca66ccd70fb4545d9a8ce479efc69e4e68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757338588394899408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f506b1-95ba-4374-a259-a48aff54cbf3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8acff52bc93907f6c3a90560e466986e532548b4f683a0335da08384853fc5,PodSandboxId:d9e7ae24d99a996359fc1e863a6c0d9a54c152ab6dfed41c83c0bc5108be396c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757338583916113148,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-kzxl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f56fe0-40a6-4ffe-b7de-7663f12a383a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e1057220717ed9cdcdad38ab351e4e2cbafd1dda0b42ca854b04be1f9ec2cb,PodSandboxId:d11dfc1f6b48b221fa32b4843beea039899146aa46d08d6c3c3dd20e0c860645,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757338575725317595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jp2nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f949b8-763b-4d38-bd0b-435148046042,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"
protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f12b9c2882a227d5713df9ab6340cd0fc18bd1cc6cb3d75c2b710ad07fe1b1ed,PodSandboxId:ec9111b8bb7249aa752a9e8dda23d1811e7a7bc9f1a49876b155246f0529c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757338574188814617,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubern
etes.pod.name: kube-proxy-qr6fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e15a8522-5091-4bff-b63d-188d2ecb8629,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d74d4f6083f161a043a27bb70ecdb49ea0b93c99cecf51776a44876f56736b4,PodSandboxId:53620840ada8142996f5e1637747df7a38b5e04c6aa6bf0728cac1430f4523ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757338562204339881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedu
ler-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b472955dcdc524b38e2239d290b22d25,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2cb7fe6974af584b1f0aa27c54c03f298f4b567258fa89ca65b5e2448d5144,PodSandboxId:758e5e705749d41e6b81a0e542bb2845016334f116014979644ffbc40e8994ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757338562
190574280,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 543412c087226072ca67a129000c9f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f9a21b035bc67d1c99924226b5e5b223df382ff4db2d3a808aa3c740551a11c,PodSandboxId:2e258ed4a4e52b978fde50e0fd231af3dafa7c70a96e4da0e2cbbbcce310f6ec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757338562137443196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e22c20da7865995a36eb339e1e8004a,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584e534d9f2fb0783086698690ed95dcac9ba84c5b393b9779d0d7204087bc1b,PodSandboxId:2a3fb6ae03d4987f136eab258302367d8249d037a03d416c5d49076645fc0bf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d1
6e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757338562118795572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 391d65c4e76aa9fc172cf2177cf0d7ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ecab2171-9acd-45bc-b631-ec0337c6f07e name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.052336791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4fc3372-fefe-4330-a1e8-7f90d353347c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.052431611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4fc3372-fefe-4330-a1e8-7f90d353347c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.054295542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b15e64cf080128acad87e6e3f638c936387259da6213dccc02d5398677d94135,PodSandboxId:b2b244ac2fdcaf40d932d1fbe46ad8304880e1e32d17a40724309fad1a57f346,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757338931876033033,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-xtmkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b0889bf2-d0c6-44fd-b019-b939fc33dc0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697ed374e1bbcd9508abdac530e549d3e892b1901488ee2778877d0dbb1f4bac,PodSandboxId:2dfb98b48df73417cb7f2e22d09c2603725d7f79e72178f2b1d5a7217ec9063a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757338787230256380,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90e9767-4367-4541-88a3-b800f3b971db,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6bc7e5c021905fd911c1f158e9312ddbb1b89c5abfc8137585d79f9dfb290c0,PodSandboxId:c49fbe23eccc1b62393bab145ea2c84cee52d7bf3c973e493194d2ed9d541e75,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3bd49f1f42c46a0def0e208037f718f7a122902ffa846fd7c2757e017c1ee29e,State:CONTAINER_RUNNING,CreatedAt:1757338778878669135,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-85f8f8dc54-kbn9j,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid
: 8dfed74d-34c1-4f83-90f0-a4ffb9d80a5b,},Annotations:map[string]string{io.kubernetes.container.hash: 22a1aabb,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1004edf588e41248d483d2cac0fd02dffb8e43b82f8ce3fdebe80f03808fbb38,PodSandboxId:921508025c05741eaa11ca5c8a4be6b635f6c1935725e1d178807c30143e2cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757338736154254979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f965c06e-67a7-4092-9b85-b30957e5cec1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b72d4d164370bd6629513e46dee6ec7a9df4e7f557f4e125cdcb77a1cc90f7,PodSandboxId:98576a25ad52fe9d5d37533cb76517f7efca247a477e7d54e811de4d6b2c0ac1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757338726286575663,Labels:map[string]string{io.kubernetes.container.name: controller,io.kuber
netes.pod.name: ingress-nginx-controller-9cc49f96f-8qxx6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3e4024c-170b-496f-a454-a099404f18b5,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:f526ebd282920a84a7e42550aa834424d5e648ec9f63e9c9734bb15ab0e5d7f6,PodSandboxId:7816bfab9d63b2ae2c66a03199043de765e7e58df3135e8059ad5fe7b11f510a,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217
da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757338697151331109,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tfcvr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 19fd6217-de22-4f1d-9a9a-d307484acc19,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b6dd3790a7481420d56157ab73430d6765705b568dbe78ae1c0be03dffad1f,PodSandboxId:ff70bf87c082056b3898b5216b61a172496a4879878846fe3a97f395962e1fa4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-ngi
nx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757338683853319201,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-r7kz8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6985e98c-9486-46bc-817f-51244ea6dcea,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db4a5aba1bccd47c10a1120cb72c12e134f9df58b4130d539d87beb6fa2370b,PodSandboxId:72f14cc46e23ec6583af04f4eea8f78ef7de65f9c2bd8561937d142685dc6229,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&Image
Spec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757338676421514389,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-84fdd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 310f336f-3bf3-4b6d-898f-9ca64c3c855b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c3ab7504df0bd6c39089f4677beed01f534414a632dcdab9bedbace8572817,PodSandboxId:e8ccb056ce5abc93eb61c2e7e8496416a
31a8c8a96b9905d2887c6a26f31af2c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1757338631964122090,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-56j4z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 72454afa-6441-4888-b636-fa1e4598bdcf,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af088e13c26ac9ac7b1c86ce2e5c29fc79735c0540
1346447d5a22fe93704aea,PodSandboxId:50389f222239ee0c71c98b52f117006bc1555199bb141f1925bcc91654069d2b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757338623108492743,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c49c1-9369-40b5-88b2-c345b39fc2a1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b16fefaa769899c4fb445716899196e05b4f815dfa4a2679e54aa8d4984b81,PodSandboxId:c181eb9b86ea37048963318eb56562ca66ccd70fb4545d9a8ce479efc69e4e68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757338588394899408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f506b1-95ba-4374-a259-a48aff54cbf3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8acff52bc93907f6c3a90560e466986e532548b4f683a0335da08384853fc5,PodSandboxId:d9e7ae24d99a996359fc1e863a6c0d9a54c152ab6dfed41c83c0bc5108be396c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757338583916113148,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-kzxl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f56fe0-40a6-4ffe-b7de-7663f12a383a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e1057220717ed9cdcdad38ab351e4e2cbafd1dda0b42ca854b04be1f9ec2cb,PodSandboxId:d11dfc1f6b48b221fa32b4843beea039899146aa46d08d6c3c3dd20e0c860645,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757338575725317595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jp2nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f949b8-763b-4d38-bd0b-435148046042,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"
protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f12b9c2882a227d5713df9ab6340cd0fc18bd1cc6cb3d75c2b710ad07fe1b1ed,PodSandboxId:ec9111b8bb7249aa752a9e8dda23d1811e7a7bc9f1a49876b155246f0529c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757338574188814617,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubern
etes.pod.name: kube-proxy-qr6fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e15a8522-5091-4bff-b63d-188d2ecb8629,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d74d4f6083f161a043a27bb70ecdb49ea0b93c99cecf51776a44876f56736b4,PodSandboxId:53620840ada8142996f5e1637747df7a38b5e04c6aa6bf0728cac1430f4523ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757338562204339881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedu
ler-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b472955dcdc524b38e2239d290b22d25,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2cb7fe6974af584b1f0aa27c54c03f298f4b567258fa89ca65b5e2448d5144,PodSandboxId:758e5e705749d41e6b81a0e542bb2845016334f116014979644ffbc40e8994ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757338562
190574280,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 543412c087226072ca67a129000c9f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f9a21b035bc67d1c99924226b5e5b223df382ff4db2d3a808aa3c740551a11c,PodSandboxId:2e258ed4a4e52b978fde50e0fd231af3dafa7c70a96e4da0e2cbbbcce310f6ec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757338562137443196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e22c20da7865995a36eb339e1e8004a,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584e534d9f2fb0783086698690ed95dcac9ba84c5b393b9779d0d7204087bc1b,PodSandboxId:2a3fb6ae03d4987f136eab258302367d8249d037a03d416c5d49076645fc0bf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d1
6e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757338562118795572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 391d65c4e76aa9fc172cf2177cf0d7ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4fc3372-fefe-4330-a1e8-7f90d353347c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.057279911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6531a69b-392e-4168-b41b-31c67b8de52d name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.057369633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6531a69b-392e-4168-b41b-31c67b8de52d name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.059406408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b15e64cf080128acad87e6e3f638c936387259da6213dccc02d5398677d94135,PodSandboxId:b2b244ac2fdcaf40d932d1fbe46ad8304880e1e32d17a40724309fad1a57f346,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757338931876033033,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-xtmkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b0889bf2-d0c6-44fd-b019-b939fc33dc0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697ed374e1bbcd9508abdac530e549d3e892b1901488ee2778877d0dbb1f4bac,PodSandboxId:2dfb98b48df73417cb7f2e22d09c2603725d7f79e72178f2b1d5a7217ec9063a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757338787230256380,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90e9767-4367-4541-88a3-b800f3b971db,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6bc7e5c021905fd911c1f158e9312ddbb1b89c5abfc8137585d79f9dfb290c0,PodSandboxId:c49fbe23eccc1b62393bab145ea2c84cee52d7bf3c973e493194d2ed9d541e75,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3bd49f1f42c46a0def0e208037f718f7a122902ffa846fd7c2757e017c1ee29e,State:CONTAINER_RUNNING,CreatedAt:1757338778878669135,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-85f8f8dc54-kbn9j,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid
: 8dfed74d-34c1-4f83-90f0-a4ffb9d80a5b,},Annotations:map[string]string{io.kubernetes.container.hash: 22a1aabb,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1004edf588e41248d483d2cac0fd02dffb8e43b82f8ce3fdebe80f03808fbb38,PodSandboxId:921508025c05741eaa11ca5c8a4be6b635f6c1935725e1d178807c30143e2cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757338736154254979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f965c06e-67a7-4092-9b85-b30957e5cec1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b72d4d164370bd6629513e46dee6ec7a9df4e7f557f4e125cdcb77a1cc90f7,PodSandboxId:98576a25ad52fe9d5d37533cb76517f7efca247a477e7d54e811de4d6b2c0ac1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757338726286575663,Labels:map[string]string{io.kubernetes.container.name: controller,io.kuber
netes.pod.name: ingress-nginx-controller-9cc49f96f-8qxx6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3e4024c-170b-496f-a454-a099404f18b5,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:f526ebd282920a84a7e42550aa834424d5e648ec9f63e9c9734bb15ab0e5d7f6,PodSandboxId:7816bfab9d63b2ae2c66a03199043de765e7e58df3135e8059ad5fe7b11f510a,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217
da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757338697151331109,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tfcvr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 19fd6217-de22-4f1d-9a9a-d307484acc19,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b6dd3790a7481420d56157ab73430d6765705b568dbe78ae1c0be03dffad1f,PodSandboxId:ff70bf87c082056b3898b5216b61a172496a4879878846fe3a97f395962e1fa4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-ngi
nx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757338683853319201,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-r7kz8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6985e98c-9486-46bc-817f-51244ea6dcea,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db4a5aba1bccd47c10a1120cb72c12e134f9df58b4130d539d87beb6fa2370b,PodSandboxId:72f14cc46e23ec6583af04f4eea8f78ef7de65f9c2bd8561937d142685dc6229,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&Image
Spec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757338676421514389,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-84fdd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 310f336f-3bf3-4b6d-898f-9ca64c3c855b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c3ab7504df0bd6c39089f4677beed01f534414a632dcdab9bedbace8572817,PodSandboxId:e8ccb056ce5abc93eb61c2e7e8496416a
31a8c8a96b9905d2887c6a26f31af2c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1757338631964122090,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-56j4z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 72454afa-6441-4888-b636-fa1e4598bdcf,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af088e13c26ac9ac7b1c86ce2e5c29fc79735c0540
1346447d5a22fe93704aea,PodSandboxId:50389f222239ee0c71c98b52f117006bc1555199bb141f1925bcc91654069d2b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757338623108492743,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c49c1-9369-40b5-88b2-c345b39fc2a1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b16fefaa769899c4fb445716899196e05b4f815dfa4a2679e54aa8d4984b81,PodSandboxId:c181eb9b86ea37048963318eb56562ca66ccd70fb4545d9a8ce479efc69e4e68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757338588394899408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f506b1-95ba-4374-a259-a48aff54cbf3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8acff52bc93907f6c3a90560e466986e532548b4f683a0335da08384853fc5,PodSandboxId:d9e7ae24d99a996359fc1e863a6c0d9a54c152ab6dfed41c83c0bc5108be396c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757338583916113148,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-kzxl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f56fe0-40a6-4ffe-b7de-7663f12a383a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e1057220717ed9cdcdad38ab351e4e2cbafd1dda0b42ca854b04be1f9ec2cb,PodSandboxId:d11dfc1f6b48b221fa32b4843beea039899146aa46d08d6c3c3dd20e0c860645,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757338575725317595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jp2nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f949b8-763b-4d38-bd0b-435148046042,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"
protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f12b9c2882a227d5713df9ab6340cd0fc18bd1cc6cb3d75c2b710ad07fe1b1ed,PodSandboxId:ec9111b8bb7249aa752a9e8dda23d1811e7a7bc9f1a49876b155246f0529c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757338574188814617,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubern
etes.pod.name: kube-proxy-qr6fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e15a8522-5091-4bff-b63d-188d2ecb8629,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d74d4f6083f161a043a27bb70ecdb49ea0b93c99cecf51776a44876f56736b4,PodSandboxId:53620840ada8142996f5e1637747df7a38b5e04c6aa6bf0728cac1430f4523ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757338562204339881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedu
ler-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b472955dcdc524b38e2239d290b22d25,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2cb7fe6974af584b1f0aa27c54c03f298f4b567258fa89ca65b5e2448d5144,PodSandboxId:758e5e705749d41e6b81a0e542bb2845016334f116014979644ffbc40e8994ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757338562
190574280,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 543412c087226072ca67a129000c9f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f9a21b035bc67d1c99924226b5e5b223df382ff4db2d3a808aa3c740551a11c,PodSandboxId:2e258ed4a4e52b978fde50e0fd231af3dafa7c70a96e4da0e2cbbbcce310f6ec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757338562137443196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e22c20da7865995a36eb339e1e8004a,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584e534d9f2fb0783086698690ed95dcac9ba84c5b393b9779d0d7204087bc1b,PodSandboxId:2a3fb6ae03d4987f136eab258302367d8249d037a03d416c5d49076645fc0bf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d1
6e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757338562118795572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 391d65c4e76aa9fc172cf2177cf0d7ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6531a69b-392e-4168-b41b-31c67b8de52d name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.061102900Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c7644d19-9bdb-47b8-94da-0518fbdc60ee name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.061557536Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b2b244ac2fdcaf40d932d1fbe46ad8304880e1e32d17a40724309fad1a57f346,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-xtmkg,Uid:b0889bf2-d0c6-44fd-b019-b939fc33dc0c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338930672054674,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-xtmkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b0889bf2-d0c6-44fd-b019-b939fc33dc0c,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T13:42:10.341710449Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2dfb98b48df73417cb7f2e22d09c2603725d7f79e72178f2b1d5a7217ec9063a,Metadata:&PodSandboxMetadata{Name:nginx,Uid:c90e9767-4367-4541-88a3-b800f3b971db,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1757338772906738721,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90e9767-4367-4541-88a3-b800f3b971db,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T13:39:32.583339058Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c49fbe23eccc1b62393bab145ea2c84cee52d7bf3c973e493194d2ed9d541e75,Metadata:&PodSandboxMetadata{Name:headlamp-85f8f8dc54-kbn9j,Uid:8dfed74d-34c1-4f83-90f0-a4ffb9d80a5b,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338764064916643,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-85f8f8dc54-kbn9j,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 8dfed74d-34c1-4f83-90f0-a4ffb9d80a5b,pod-template-hash: 85f8f8dc54,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-
08T13:39:23.743264431Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:921508025c05741eaa11ca5c8a4be6b635f6c1935725e1d178807c30143e2cf1,Metadata:&PodSandboxMetadata{Name:busybox,Uid:f965c06e-67a7-4092-9b85-b30957e5cec1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338733684965697,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f965c06e-67a7-4092-9b85-b30957e5cec1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T13:38:53.356486171Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:98576a25ad52fe9d5d37533cb76517f7efca247a477e7d54e811de4d6b2c0ac1,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-9cc49f96f-8qxx6,Uid:b3e4024c-170b-496f-a454-a099404f18b5,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338718971378453,Labels:map[string]string{app.kubernetes.io/component: controller,a
pp.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-8qxx6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3e4024c-170b-496f-a454-a099404f18b5,pod-template-hash: 9cc49f96f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T13:36:24.129250182Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7816bfab9d63b2ae2c66a03199043de765e7e58df3135e8059ad5fe7b11f510a,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-tfcvr,Uid:19fd6217-de22-4f1d-9a9a-d307484acc19,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1757338585415325990,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 545e2b07-69ed-4cb4-bc0e-2337f0f60fc4,batch.kubernetes.io/job-name: ingress-n
ginx-admission-patch,controller-uid: 545e2b07-69ed-4cb4-bc0e-2337f0f60fc4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-tfcvr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 19fd6217-de22-4f1d-9a9a-d307484acc19,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T13:36:24.711541477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ff70bf87c082056b3898b5216b61a172496a4879878846fe3a97f395962e1fa4,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-r7kz8,Uid:6985e98c-9486-46bc-817f-51244ea6dcea,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1757338585271564150,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 47288d06-6a1a-4d6d-a022-d419346b8c7f,batch.kubernetes.io/job-name: ingress-nginx-admission-create,con
troller-uid: 47288d06-6a1a-4d6d-a022-d419346b8c7f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-r7kz8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6985e98c-9486-46bc-817f-51244ea6dcea,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T13:36:24.610543310Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:72f14cc46e23ec6583af04f4eea8f78ef7de65f9c2bd8561937d142685dc6229,Metadata:&PodSandboxMetadata{Name:gadget-84fdd,Uid:310f336f-3bf3-4b6d-898f-9ca64c3c855b,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338583592188982,Labels:map[string]string{controller-revision-hash: 5d768b79cb,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-84fdd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 310f336f-3bf3-4b6d-898f-9ca64c3c855b,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta.kubernet
es.io/gadget: unconfined,kubernetes.io/config.seen: 2025-09-08T13:36:22.991668762Z,kubernetes.io/config.source: api,prometheus.io/path: /metrics,prometheus.io/port: 2223,prometheus.io/scrape: true,},RuntimeHandler:,},&PodSandbox{Id:e8ccb056ce5abc93eb61c2e7e8496416a31a8c8a96b9905d2887c6a26f31af2c,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-648f6765c9-56j4z,Uid:72454afa-6441-4888-b636-fa1e4598bdcf,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338583366426465,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-56j4z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 72454afa-6441-4888-b636-fa1e4598bdcf,pod-template-hash: 648f6765c9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T13:36:22.616856008Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50389f222239ee0c71c98b52f117006bc1555199bb141f1925bcc91654069d2b,Metadata:
&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:ab8c49c1-9369-40b5-88b2-c345b39fc2a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338583339833508,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c49c1-9369-40b5-88b2-c345b39fc2a1,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3
fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-09-08T13:36:21.934369223Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c181eb9b86ea37048963318eb56562ca66ccd70fb4545d9a8ce479efc69e4e68,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:03f506b1-95ba-4374-a259-a48aff54cbf3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338583336455643,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f506b1-95ba-4374-a259-a48aff54cbf3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-08T13:36:21.637525567Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d9e7ae24d99a996359fc1e863a6c0d9a54c152ab6dfed41c83c0bc5108b
e396c,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-kzxl5,Uid:b3f56fe0-40a6-4ffe-b7de-7663f12a383a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338577906949137,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-kzxl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f56fe0-40a6-4ffe-b7de-7663f12a383a,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T13:36:17.562980073Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d11dfc1f6b48b221fa32b4843beea039899146aa46d08d6c3c3dd20e0c860645,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-jp2nv,Uid:62f949b8-763b-4d38-bd0b-435148046042,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338574586281682,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredn
s-66bc5c9577-jp2nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f949b8-763b-4d38-bd0b-435148046042,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T13:36:14.166813790Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ec9111b8bb7249aa752a9e8dda23d1811e7a7bc9f1a49876b155246f0529c0d9,Metadata:&PodSandboxMetadata{Name:kube-proxy-qr6fm,Uid:e15a8522-5091-4bff-b63d-188d2ecb8629,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338573759224653,Labels:map[string]string{controller-revision-hash: 6f475c7966,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qr6fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e15a8522-5091-4bff-b63d-188d2ecb8629,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T13:36:12.833050839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53620840ada814
2996f5e1637747df7a38b5e04c6aa6bf0728cac1430f4523ab,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-674449,Uid:b472955dcdc524b38e2239d290b22d25,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338561899850121,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b472955dcdc524b38e2239d290b22d25,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b472955dcdc524b38e2239d290b22d25,kubernetes.io/config.seen: 2025-09-08T13:36:00.908692185Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:758e5e705749d41e6b81a0e542bb2845016334f116014979644ffbc40e8994ff,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-674449,Uid:543412c087226072ca67a129000c9f1d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338561885838771,Labels:map[string]string{component: kube-controller-manag
er,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 543412c087226072ca67a129000c9f1d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 543412c087226072ca67a129000c9f1d,kubernetes.io/config.seen: 2025-09-08T13:36:00.908691167Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2e258ed4a4e52b978fde50e0fd231af3dafa7c70a96e4da0e2cbbbcce310f6ec,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-674449,Uid:1e22c20da7865995a36eb339e1e8004a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338561875922476,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e22c20da7865995a36eb339e1e8004a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-addres
s.endpoint: 192.168.39.135:8443,kubernetes.io/config.hash: 1e22c20da7865995a36eb339e1e8004a,kubernetes.io/config.seen: 2025-09-08T13:36:00.908689964Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a3fb6ae03d4987f136eab258302367d8249d037a03d416c5d49076645fc0bf8,Metadata:&PodSandboxMetadata{Name:etcd-addons-674449,Uid:391d65c4e76aa9fc172cf2177cf0d7ed,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757338561872127004,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 391d65c4e76aa9fc172cf2177cf0d7ed,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.135:2379,kubernetes.io/config.hash: 391d65c4e76aa9fc172cf2177cf0d7ed,kubernetes.io/config.seen: 2025-09-08T13:36:00.908684790Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=
c7644d19-9bdb-47b8-94da-0518fbdc60ee name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.081364099Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6199a286-8133-49eb-ac0b-19612ced7c87 name=/runtime.v1.RuntimeService/Version
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.081458402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6199a286-8133-49eb-ac0b-19612ced7c87 name=/runtime.v1.RuntimeService/Version
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.084428697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=530996ef-7a5e-48c9-a365-619df182e334 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.086498174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757338932086461548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605485,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=530996ef-7a5e-48c9-a365-619df182e334 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.089282458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e9f107c-a2c0-4c7e-881c-48c01a03825c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.089352200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e9f107c-a2c0-4c7e-881c-48c01a03825c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 13:42:12 addons-674449 crio[828]: time="2025-09-08 13:42:12.089915902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b15e64cf080128acad87e6e3f638c936387259da6213dccc02d5398677d94135,PodSandboxId:b2b244ac2fdcaf40d932d1fbe46ad8304880e1e32d17a40724309fad1a57f346,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757338931876033033,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-xtmkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b0889bf2-d0c6-44fd-b019-b939fc33dc0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697ed374e1bbcd9508abdac530e549d3e892b1901488ee2778877d0dbb1f4bac,PodSandboxId:2dfb98b48df73417cb7f2e22d09c2603725d7f79e72178f2b1d5a7217ec9063a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757338787230256380,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90e9767-4367-4541-88a3-b800f3b971db,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6bc7e5c021905fd911c1f158e9312ddbb1b89c5abfc8137585d79f9dfb290c0,PodSandboxId:c49fbe23eccc1b62393bab145ea2c84cee52d7bf3c973e493194d2ed9d541e75,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3bd49f1f42c46a0def0e208037f718f7a122902ffa846fd7c2757e017c1ee29e,State:CONTAINER_RUNNING,CreatedAt:1757338778878669135,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-85f8f8dc54-kbn9j,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid
: 8dfed74d-34c1-4f83-90f0-a4ffb9d80a5b,},Annotations:map[string]string{io.kubernetes.container.hash: 22a1aabb,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1004edf588e41248d483d2cac0fd02dffb8e43b82f8ce3fdebe80f03808fbb38,PodSandboxId:921508025c05741eaa11ca5c8a4be6b635f6c1935725e1d178807c30143e2cf1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757338736154254979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f965c06e-67a7-4092-9b85-b30957e5cec1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b72d4d164370bd6629513e46dee6ec7a9df4e7f557f4e125cdcb77a1cc90f7,PodSandboxId:98576a25ad52fe9d5d37533cb76517f7efca247a477e7d54e811de4d6b2c0ac1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757338726286575663,Labels:map[string]string{io.kubernetes.container.name: controller,io.kuber
netes.pod.name: ingress-nginx-controller-9cc49f96f-8qxx6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3e4024c-170b-496f-a454-a099404f18b5,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:f526ebd282920a84a7e42550aa834424d5e648ec9f63e9c9734bb15ab0e5d7f6,PodSandboxId:7816bfab9d63b2ae2c66a03199043de765e7e58df3135e8059ad5fe7b11f510a,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217
da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757338697151331109,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tfcvr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 19fd6217-de22-4f1d-9a9a-d307484acc19,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b6dd3790a7481420d56157ab73430d6765705b568dbe78ae1c0be03dffad1f,PodSandboxId:ff70bf87c082056b3898b5216b61a172496a4879878846fe3a97f395962e1fa4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-ngi
nx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757338683853319201,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-r7kz8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6985e98c-9486-46bc-817f-51244ea6dcea,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db4a5aba1bccd47c10a1120cb72c12e134f9df58b4130d539d87beb6fa2370b,PodSandboxId:72f14cc46e23ec6583af04f4eea8f78ef7de65f9c2bd8561937d142685dc6229,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&Image
Spec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757338676421514389,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-84fdd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 310f336f-3bf3-4b6d-898f-9ca64c3c855b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c3ab7504df0bd6c39089f4677beed01f534414a632dcdab9bedbace8572817,PodSandboxId:e8ccb056ce5abc93eb61c2e7e8496416a
31a8c8a96b9905d2887c6a26f31af2c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1757338631964122090,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-56j4z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 72454afa-6441-4888-b636-fa1e4598bdcf,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af088e13c26ac9ac7b1c86ce2e5c29fc79735c0540
1346447d5a22fe93704aea,PodSandboxId:50389f222239ee0c71c98b52f117006bc1555199bb141f1925bcc91654069d2b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757338623108492743,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8c49c1-9369-40b5-88b2-c345b39fc2a1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b16fefaa769899c4fb445716899196e05b4f815dfa4a2679e54aa8d4984b81,PodSandboxId:c181eb9b86ea37048963318eb56562ca66ccd70fb4545d9a8ce479efc69e4e68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757338588394899408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f506b1-95ba-4374-a259-a48aff54cbf3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8acff52bc93907f6c3a90560e466986e532548b4f683a0335da08384853fc5,PodSandboxId:d9e7ae24d99a996359fc1e863a6c0d9a54c152ab6dfed41c83c0bc5108be396c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757338583916113148,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-kzxl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f56fe0-40a6-4ffe-b7de-7663f12a383a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e1057220717ed9cdcdad38ab351e4e2cbafd1dda0b42ca854b04be1f9ec2cb,PodSandboxId:d11dfc1f6b48b221fa32b4843beea039899146aa46d08d6c3c3dd20e0c860645,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757338575725317595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jp2nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f949b8-763b-4d38-bd0b-435148046042,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"
protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f12b9c2882a227d5713df9ab6340cd0fc18bd1cc6cb3d75c2b710ad07fe1b1ed,PodSandboxId:ec9111b8bb7249aa752a9e8dda23d1811e7a7bc9f1a49876b155246f0529c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757338574188814617,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubern
etes.pod.name: kube-proxy-qr6fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e15a8522-5091-4bff-b63d-188d2ecb8629,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d74d4f6083f161a043a27bb70ecdb49ea0b93c99cecf51776a44876f56736b4,PodSandboxId:53620840ada8142996f5e1637747df7a38b5e04c6aa6bf0728cac1430f4523ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757338562204339881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedu
ler-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b472955dcdc524b38e2239d290b22d25,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2cb7fe6974af584b1f0aa27c54c03f298f4b567258fa89ca65b5e2448d5144,PodSandboxId:758e5e705749d41e6b81a0e542bb2845016334f116014979644ffbc40e8994ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757338562
190574280,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 543412c087226072ca67a129000c9f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f9a21b035bc67d1c99924226b5e5b223df382ff4db2d3a808aa3c740551a11c,PodSandboxId:2e258ed4a4e52b978fde50e0fd231af3dafa7c70a96e4da0e2cbbbcce310f6ec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757338562137443196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e22c20da7865995a36eb339e1e8004a,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:584e534d9f2fb0783086698690ed95dcac9ba84c5b393b9779d0d7204087bc1b,PodSandboxId:2a3fb6ae03d4987f136eab258302367d8249d037a03d416c5d49076645fc0bf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d1
6e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757338562118795572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-674449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 391d65c4e76aa9fc172cf2177cf0d7ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e9f107c-a2c0-4c7e-881c-48c01a03825c name=/runtime.v1.RuntimeService/ListContainers
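
Note: the paired Request/Response entries above are CRI-O's gRPC interceptor logging kubelet's CRI calls (ListPodSandbox, ListContainers, ImageFsInfo). If a failure needs cross-checking against the runtime directly, the same data can be pulled on the node with crictl; a sketch, assuming the standard minikube node image, which ships crictl:

    out/minikube-linux-amd64 -p addons-674449 ssh "sudo crictl pods"
    out/minikube-linux-amd64 -p addons-674449 ssh "sudo crictl ps -a"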
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	b15e64cf08012       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   b2b244ac2fdca       hello-world-app-5d498dc89-xtmkg
	697ed374e1bbc       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   2dfb98b48df73       nginx
	e6bc7e5c02190       ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39                        2 minutes ago            Running             headlamp                  0                   c49fbe23eccc1       headlamp-85f8f8dc54-kbn9j
	1004edf588e41       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   921508025c057       busybox
	c0b72d4d16437       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago            Running             controller                0                   98576a25ad52f       ingress-nginx-controller-9cc49f96f-8qxx6
	f526ebd282920       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             3 minutes ago            Exited              patch                     2                   7816bfab9d63b       ingress-nginx-admission-patch-tfcvr
	63b6dd3790a74       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              create                    0                   ff70bf87c0820       ingress-nginx-admission-create-r7kz8
	0db4a5aba1bcc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            4 minutes ago            Running             gadget                    0                   72f14cc46e23e       gadget-84fdd
	51c3ab7504df0       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             5 minutes ago            Running             local-path-provisioner    0                   e8ccb056ce5ab       local-path-provisioner-648f6765c9-56j4z
	af088e13c26ac       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               5 minutes ago            Running             minikube-ingress-dns      0                   50389f222239e       kube-ingress-dns-minikube
	b9b16fefaa769       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago            Running             storage-provisioner       0                   c181eb9b86ea3       storage-provisioner
	ae8acff52bc93       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago            Running             amd-gpu-device-plugin     0                   d9e7ae24d99a9       amd-gpu-device-plugin-kzxl5
	67e1057220717       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago            Running             coredns                   0                   d11dfc1f6b48b       coredns-66bc5c9577-jp2nv
	f12b9c2882a22       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             5 minutes ago            Running             kube-proxy                0                   ec9111b8bb724       kube-proxy-qr6fm
	3d74d4f6083f1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             6 minutes ago            Running             kube-scheduler            0                   53620840ada81       kube-scheduler-addons-674449
	8b2cb7fe6974a       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             6 minutes ago            Running             kube-controller-manager   0                   758e5e705749d       kube-controller-manager-addons-674449
	8f9a21b035bc6       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             6 minutes ago            Running             kube-apiserver            0                   2e258ed4a4e52       kube-apiserver-addons-674449
	584e534d9f2fb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             6 minutes ago            Running             etcd                      0                   2a3fb6ae03d49       etcd-addons-674449
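
The status table shows ingress-nginx-controller (with hostPorts 80/443, per the container annotations above) and the test nginx pod Running minutes before the failing curl, and exit status 28 is curl's timeout code (CURLE_OPERATION_TIMEDOUT), so the failure points at the 127.0.0.1:80 hostPort path inside the VM rather than at pod readiness. A minimal re-check from inside the VM, as a hypothetical follow-up rather than part of the recorded run, would bound the wait and show where the connection stalls:

    out/minikube-linux-amd64 -p addons-674449 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"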
	
	
	==> coredns [67e1057220717ed9cdcdad38ab351e4e2cbafd1dda0b42ca854b04be1f9ec2cb] <==
	[INFO] 10.244.0.9:48508 - 28908 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002443482s
	[INFO] 10.244.0.9:48508 - 51791 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000119611s
	[INFO] 10.244.0.9:48508 - 55191 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000870891s
	[INFO] 10.244.0.9:48508 - 37162 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000084926s
	[INFO] 10.244.0.9:48508 - 44228 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000589443s
	[INFO] 10.244.0.9:48508 - 55365 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000201227s
	[INFO] 10.244.0.9:48508 - 6833 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000172569s
	[INFO] 10.244.0.9:40328 - 32085 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000175456s
	[INFO] 10.244.0.9:40328 - 32378 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000153763s
	[INFO] 10.244.0.9:60955 - 30488 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000249578s
	[INFO] 10.244.0.9:60955 - 30234 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000389379s
	[INFO] 10.244.0.9:36161 - 26167 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00023885s
	[INFO] 10.244.0.9:36161 - 25902 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000378526s
	[INFO] 10.244.0.9:42430 - 58057 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125409s
	[INFO] 10.244.0.9:42430 - 57609 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000242122s
	[INFO] 10.244.0.23:45799 - 64541 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000433175s
	[INFO] 10.244.0.23:49374 - 1568 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000130734s
	[INFO] 10.244.0.23:44168 - 45348 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130462s
	[INFO] 10.244.0.23:52808 - 60081 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000321716s
	[INFO] 10.244.0.23:58134 - 59616 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137326s
	[INFO] 10.244.0.23:45272 - 16425 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129649s
	[INFO] 10.244.0.23:42954 - 20749 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001373568s
	[INFO] 10.244.0.23:58224 - 60546 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00433701s
	[INFO] 10.244.0.28:34307 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000859827s
	[INFO] 10.244.0.28:45796 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110713s
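
The NXDOMAIN bursts above are the expected ndots:5 search-path expansion, not a DNS fault: a name like registry.kube-system.svc.cluster.local has fewer than five dots, so the resolver first tries it with each search suffix appended (hence the NXDOMAIN answers for the .kube-system.svc.cluster.local, .svc.cluster.local, and .cluster.local variants) before the unsuffixed name returns NOERROR. The chain can be reproduced from a pod; a sketch, assuming the test's busybox pod is still running:

    kubectl --context addons-674449 exec busybox -- nslookup registry.kube-system.svc.cluster.local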
	
	
	==> describe nodes <==
	Name:               addons-674449
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-674449
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=addons-674449
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_36_08_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-674449
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:36:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-674449
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 13:42:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:40:13 +0000   Mon, 08 Sep 2025 13:36:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:40:13 +0000   Mon, 08 Sep 2025 13:36:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:40:13 +0000   Mon, 08 Sep 2025 13:36:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:40:13 +0000   Mon, 08 Sep 2025 13:36:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    addons-674449
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 356baa17320a429db58d9b2bf6a59188
	  System UUID:                356baa17-320a-429d-b58d-9b2bf6a59188
	  Boot ID:                    6f6d7469-8c81-4279-945f-1e977fbe14d9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  default                     hello-world-app-5d498dc89-xtmkg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  gadget                      gadget-84fdd                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  headlamp                    headlamp-85f8f8dc54-kbn9j                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-8qxx6    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m48s
	  kube-system                 amd-gpu-device-plugin-kzxl5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 coredns-66bc5c9577-jp2nv                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m58s
	  kube-system                 etcd-addons-674449                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m7s
	  kube-system                 kube-apiserver-addons-674449                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-controller-manager-addons-674449       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-proxy-qr6fm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-scheduler-addons-674449                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  local-path-storage          local-path-provisioner-648f6765c9-56j4z     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m56s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m11s (x8 over 6m12s)  kubelet          Node addons-674449 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s (x8 over 6m12s)  kubelet          Node addons-674449 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m11s (x7 over 6m12s)  kubelet          Node addons-674449 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m5s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m4s                   kubelet          Node addons-674449 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s                   kubelet          Node addons-674449 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s                   kubelet          Node addons-674449 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m4s                   kubelet          Node addons-674449 status is now: NodeReady
	  Normal  RegisteredNode           6m                     node-controller  Node addons-674449 event: Registered Node addons-674449 in Controller
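
The two batches of NodeHasSufficient*/NodeAllocatableEnforced events separated by the "Starting kubelet." entry are consistent with the kubelet being restarted once during kubeadm-style bring-up, not with a condition flapping. The same snapshot can be regenerated at any point with:

    kubectl --context addons-674449 describe node addons-674449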
	
	
	==> dmesg <==
	[  +0.036546] kauditd_printk_skb: 125 callbacks suppressed
	[  +0.975212] kauditd_printk_skb: 429 callbacks suppressed
	[ +14.341236] kauditd_printk_skb: 181 callbacks suppressed
	[  +6.186514] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.193365] kauditd_printk_skb: 32 callbacks suppressed
	[Sep 8 13:37] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.437781] kauditd_printk_skb: 32 callbacks suppressed
	[Sep 8 13:38] kauditd_printk_skb: 20 callbacks suppressed
	[  +2.153943] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.976781] kauditd_printk_skb: 114 callbacks suppressed
	[  +5.168004] kauditd_printk_skb: 72 callbacks suppressed
	[  +0.001150] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.274381] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.000205] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.829525] kauditd_printk_skb: 47 callbacks suppressed
	[Sep 8 13:39] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.061803] kauditd_printk_skb: 76 callbacks suppressed
	[  +1.182946] kauditd_printk_skb: 149 callbacks suppressed
	[  +1.000205] kauditd_printk_skb: 159 callbacks suppressed
	[  +3.315314] kauditd_printk_skb: 131 callbacks suppressed
	[  +7.084760] kauditd_printk_skb: 49 callbacks suppressed
	[  +8.539171] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000957] kauditd_printk_skb: 10 callbacks suppressed
	[Sep 8 13:40] kauditd_printk_skb: 41 callbacks suppressed
	[Sep 8 13:42] kauditd_printk_skb: 127 callbacks suppressed
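
The kauditd_printk_skb lines only mean the kernel rate-limited audit records being printed to the log (N callbacks suppressed); heavy container churn routinely triggers this and it is benign. To inspect what did land in the ring buffer around the failure window, a sketch:

    out/minikube-linux-amd64 -p addons-674449 ssh "sudo dmesg | tail -n 40"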
	
	
	==> etcd [584e534d9f2fb0783086698690ed95dcac9ba84c5b393b9779d0d7204087bc1b] <==
	{"level":"warn","ts":"2025-09-08T13:38:07.061853Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"301.056106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T13:38:07.061873Z","caller":"traceutil/trace.go:172","msg":"trace[805429750] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1126; }","duration":"301.091553ms","start":"2025-09-08T13:38:06.760776Z","end":"2025-09-08T13:38:07.061867Z","steps":["trace[805429750] 'agreement among raft nodes before linearized reading'  (duration: 301.026031ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:38:07.061902Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T13:38:06.760753Z","time spent":"301.136179ms","remote":"127.0.0.1:38592","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T13:38:07.062146Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"299.225236ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T13:38:07.062226Z","caller":"traceutil/trace.go:172","msg":"trace[39428492] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1127; }","duration":"299.310425ms","start":"2025-09-08T13:38:06.762907Z","end":"2025-09-08T13:38:07.062217Z","steps":["trace[39428492] 'agreement among raft nodes before linearized reading'  (duration: 299.212173ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:38:07.062323Z","caller":"traceutil/trace.go:172","msg":"trace[1932976656] transaction","detail":"{read_only:false; response_revision:1127; number_of_response:1; }","duration":"320.187568ms","start":"2025-09-08T13:38:06.742128Z","end":"2025-09-08T13:38:07.062315Z","steps":["trace[1932976656] 'process raft request'  (duration: 319.685215ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:38:07.062442Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T13:38:06.742108Z","time spent":"320.235259ms","remote":"127.0.0.1:38688","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4479,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" mod_revision:732 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" value_size:4412 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" > >"}
	{"level":"warn","ts":"2025-09-08T13:38:07.062636Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.522233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.135\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-09-08T13:38:07.062666Z","caller":"traceutil/trace.go:172","msg":"trace[37969440] range","detail":"{range_begin:/registry/masterleases/192.168.39.135; range_end:; response_count:1; response_revision:1127; }","duration":"168.666933ms","start":"2025-09-08T13:38:06.893992Z","end":"2025-09-08T13:38:07.062659Z","steps":["trace[37969440] 'agreement among raft nodes before linearized reading'  (duration: 168.473084ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:38:07.062832Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"214.529646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/gadget/gadget\" limit:1 ","response":"range_response_count:1 size:10086"}
	{"level":"info","ts":"2025-09-08T13:38:07.062851Z","caller":"traceutil/trace.go:172","msg":"trace[72823110] range","detail":"{range_begin:/registry/daemonsets/gadget/gadget; range_end:; response_count:1; response_revision:1127; }","duration":"214.548409ms","start":"2025-09-08T13:38:06.848296Z","end":"2025-09-08T13:38:07.062845Z","steps":["trace[72823110] 'agreement among raft nodes before linearized reading'  (duration: 214.460256ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:38:42.174382Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.584635ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T13:38:42.174518Z","caller":"traceutil/trace.go:172","msg":"trace[1526947476] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"170.72927ms","start":"2025-09-08T13:38:42.003776Z","end":"2025-09-08T13:38:42.174505Z","steps":["trace[1526947476] 'range keys from in-memory index tree'  (duration: 170.432361ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:38:42.174819Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.545488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T13:38:42.174841Z","caller":"traceutil/trace.go:172","msg":"trace[160553070] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"146.570129ms","start":"2025-09-08T13:38:42.028265Z","end":"2025-09-08T13:38:42.174835Z","steps":["trace[160553070] 'range keys from in-memory index tree'  (duration: 146.501082ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:38:43.556996Z","caller":"traceutil/trace.go:172","msg":"trace[140891176] transaction","detail":"{read_only:false; response_revision:1264; number_of_response:1; }","duration":"253.599486ms","start":"2025-09-08T13:38:43.303379Z","end":"2025-09-08T13:38:43.556979Z","steps":["trace[140891176] 'process raft request'  (duration: 171.439585ms)","trace[140891176] 'compare'  (duration: 81.550395ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T13:39:20.755446Z","caller":"traceutil/trace.go:172","msg":"trace[660818627] transaction","detail":"{read_only:false; response_revision:1477; number_of_response:1; }","duration":"286.004173ms","start":"2025-09-08T13:39:20.469423Z","end":"2025-09-08T13:39:20.755428Z","steps":["trace[660818627] 'process raft request'  (duration: 285.84093ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:39:36.452225Z","caller":"traceutil/trace.go:172","msg":"trace[2124234975] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1646; }","duration":"145.011598ms","start":"2025-09-08T13:39:36.307165Z","end":"2025-09-08T13:39:36.452177Z","steps":["trace[2124234975] 'process raft request'  (duration: 90.036898ms)","trace[2124234975] 'compare'  (duration: 54.736322ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T13:39:40.201127Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.747708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T13:39:40.201210Z","caller":"traceutil/trace.go:172","msg":"trace[73737848] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1682; }","duration":"126.927051ms","start":"2025-09-08T13:39:40.074268Z","end":"2025-09-08T13:39:40.201195Z","steps":["trace[73737848] 'range keys from in-memory index tree'  (duration: 126.621326ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:39:47.104412Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.846151ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18157837676323013339 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.135\" mod_revision:1657 > success:<request_put:<key:\"/registry/masterleases/192.168.39.135\" value_size:67 lease:8934465639468237529 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.135\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-08T13:39:47.104509Z","caller":"traceutil/trace.go:172","msg":"trace[486843855] linearizableReadLoop","detail":"{readStateIndex:1768; appliedIndex:1767; }","duration":"136.421357ms","start":"2025-09-08T13:39:46.968078Z","end":"2025-09-08T13:39:47.104500Z","steps":["trace[486843855] 'read index received'  (duration: 27.016µs)","trace[486843855] 'applied index is now lower than readState.Index'  (duration: 136.393358ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T13:39:47.105261Z","caller":"traceutil/trace.go:172","msg":"trace[853037638] transaction","detail":"{read_only:false; response_revision:1695; number_of_response:1; }","duration":"189.885404ms","start":"2025-09-08T13:39:46.914688Z","end":"2025-09-08T13:39:47.104573Z","steps":["trace[853037638] 'process raft request'  (duration: 26.500219ms)","trace[853037638] 'compare'  (duration: 162.657455ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T13:39:47.107522Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.426898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-09-08T13:39:47.107777Z","caller":"traceutil/trace.go:172","msg":"trace[1541096732] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1695; }","duration":"139.692493ms","start":"2025-09-08T13:39:46.968075Z","end":"2025-09-08T13:39:47.107767Z","steps":["trace[1541096732] 'agreement among raft nodes before linearized reading'  (duration: 139.174134ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:42:12 up 6 min,  0 users,  load average: 1.06, 1.40, 0.81
	Linux addons-674449 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [8f9a21b035bc67d1c99924226b5e5b223df382ff4db2d3a808aa3c740551a11c] <==
	E0908 13:39:03.574157       1 conn.go:339] Error on socket receive: read tcp 192.168.39.135:8443->192.168.39.1:35758: use of closed network connection
	E0908 13:39:03.784033       1 conn.go:339] Error on socket receive: read tcp 192.168.39.135:8443->192.168.39.1:35778: use of closed network connection
	I0908 13:39:23.636812       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.122.249"}
	I0908 13:39:30.245816       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:39:32.428371       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0908 13:39:32.629376       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.64.180"}
	I0908 13:39:47.163700       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:39:52.140831       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0908 13:39:57.569011       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0908 13:40:07.773660       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:40:07.773827       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 13:40:07.810663       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:40:07.810782       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 13:40:07.823393       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:40:07.823444       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 13:40:07.853902       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:40:07.854017       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 13:40:07.901638       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:40:07.901691       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0908 13:40:08.824773       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0908 13:40:08.904802       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0908 13:40:09.033383       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0908 13:40:51.415110       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:41:12.311067       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:42:10.460278       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.82.72"}
	
	
	==> kube-controller-manager [8b2cb7fe6974af584b1f0aa27c54c03f298f4b567258fa89ca65b5e2448d5144] <==
	E0908 13:40:17.585666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:40:17.826058       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:40:17.827204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:40:18.122175       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:40:18.123675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:40:25.274759       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:40:25.275851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:40:26.972481       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:40:26.973673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:40:27.139812       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:40:27.141414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:40:41.502015       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:40:41.503194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:40:44.110423       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:40:44.111820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:40:50.145366       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:40:50.146558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:41:15.524976       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:41:15.526122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:41:32.511897       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:41:32.513196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:41:37.904702       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:41:37.906363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:42:07.568215       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:42:07.569696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [f12b9c2882a227d5713df9ab6340cd0fc18bd1cc6cb3d75c2b710ad07fe1b1ed] <==
	I0908 13:36:15.064344       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:36:15.166992       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:36:15.167031       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.135"]
	E0908 13:36:15.167140       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:36:15.552965       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 13:36:15.553437       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 13:36:15.553535       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:36:15.587213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:36:15.587562       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:36:15.587653       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:36:15.616007       1 config.go:200] "Starting service config controller"
	I0908 13:36:15.616043       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:36:15.616091       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:36:15.616096       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:36:15.616108       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:36:15.616113       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:36:15.621261       1 config.go:309] "Starting node config controller"
	I0908 13:36:15.621389       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:36:15.621399       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:36:15.717053       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:36:15.717096       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:36:15.717149       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3d74d4f6083f161a043a27bb70ecdb49ea0b93c99cecf51776a44876f56736b4] <==
	E0908 13:36:04.887315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 13:36:04.891166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 13:36:04.891264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 13:36:04.891325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:36:04.891380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 13:36:04.891448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 13:36:04.889065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 13:36:04.892352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 13:36:04.891657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 13:36:05.714476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 13:36:05.741039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 13:36:05.806530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 13:36:05.876143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:36:05.951828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 13:36:05.956126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 13:36:06.035720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 13:36:06.035719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 13:36:06.081003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 13:36:06.084461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 13:36:06.109181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0908 13:36:06.145287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 13:36:06.180167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 13:36:06.199970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:36:06.228640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I0908 13:36:08.761720       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 13:40:28 addons-674449 kubelet[1509]: E0908 13:40:28.217367    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757338828216843792  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:40:28 addons-674449 kubelet[1509]: E0908 13:40:28.217423    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757338828216843792  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:40:38 addons-674449 kubelet[1509]: E0908 13:40:38.220436    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757338838219915240  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:40:38 addons-674449 kubelet[1509]: E0908 13:40:38.220484    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757338838219915240  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:40:48 addons-674449 kubelet[1509]: E0908 13:40:48.223983    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757338848223112130  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:40:48 addons-674449 kubelet[1509]: E0908 13:40:48.224100    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757338848223112130  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:40:58 addons-674449 kubelet[1509]: E0908 13:40:58.227827    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757338858227161495  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:40:58 addons-674449 kubelet[1509]: E0908 13:40:58.228145    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757338858227161495  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:07 addons-674449 kubelet[1509]: I0908 13:41:07.962027    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 13:41:08 addons-674449 kubelet[1509]: E0908 13:41:08.231892    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757338868231044984  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:08 addons-674449 kubelet[1509]: E0908 13:41:08.231916    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757338868231044984  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:18 addons-674449 kubelet[1509]: E0908 13:41:18.237162    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757338878236197211  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:18 addons-674449 kubelet[1509]: E0908 13:41:18.237500    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757338878236197211  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:28 addons-674449 kubelet[1509]: E0908 13:41:28.241281    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757338888240790471  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:28 addons-674449 kubelet[1509]: E0908 13:41:28.241698    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757338888240790471  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:38 addons-674449 kubelet[1509]: E0908 13:41:38.244896    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757338898244310725  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:38 addons-674449 kubelet[1509]: E0908 13:41:38.244952    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757338898244310725  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:47 addons-674449 kubelet[1509]: I0908 13:41:47.962922    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-kzxl5" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 13:41:48 addons-674449 kubelet[1509]: E0908 13:41:48.250076    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757338908249445636  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:48 addons-674449 kubelet[1509]: E0908 13:41:48.250158    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757338908249445636  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:58 addons-674449 kubelet[1509]: E0908 13:41:58.253287    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757338918252690862  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:41:58 addons-674449 kubelet[1509]: E0908 13:41:58.253314    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757338918252690862  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:42:08 addons-674449 kubelet[1509]: E0908 13:42:08.257688    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757338928257061134  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:42:08 addons-674449 kubelet[1509]: E0908 13:42:08.257763    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757338928257061134  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 13:42:10 addons-674449 kubelet[1509]: I0908 13:42:10.418916    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tksgd\" (UniqueName: \"kubernetes.io/projected/b0889bf2-d0c6-44fd-b019-b939fc33dc0c-kube-api-access-tksgd\") pod \"hello-world-app-5d498dc89-xtmkg\" (UID: \"b0889bf2-d0c6-44fd-b019-b939fc33dc0c\") " pod="default/hello-world-app-5d498dc89-xtmkg"
	
	
	==> storage-provisioner [b9b16fefaa769899c4fb445716899196e05b4f815dfa4a2679e54aa8d4984b81] <==
	W0908 13:41:47.989429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:41:49.993174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:41:49.999059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:41:52.003961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:41:52.012902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:41:54.017684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:41:54.024949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:41:56.029228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:41:56.040099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:41:58.043671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:41:58.050486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:00.053759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:00.063454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:02.067131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:02.075024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:04.079700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:04.088451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:06.092040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:06.099747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:08.103780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:08.117376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:10.121459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:10.127545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:12.131106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:42:12.140337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-674449 -n addons-674449
helpers_test.go:269: (dbg) Run:  kubectl --context addons-674449 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-r7kz8 ingress-nginx-admission-patch-tfcvr
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-674449 describe pod ingress-nginx-admission-create-r7kz8 ingress-nginx-admission-patch-tfcvr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-674449 describe pod ingress-nginx-admission-create-r7kz8 ingress-nginx-admission-patch-tfcvr: exit status 1 (63.847962ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r7kz8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tfcvr" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-674449 describe pod ingress-nginx-admission-create-r7kz8 ingress-nginx-admission-patch-tfcvr: exit status 1
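
Note: the two non-running pods come from the ingress-nginx admission-create and admission-patch Jobs, which run to completion and whose pods are then garbage-collected; that would explain the NotFound between the pod listing and the describe. A hedged way to confirm, assuming the cluster is still reachable:

	# Sketch only: assumes the addons-674449 cluster is still reachable.
	kubectl --context addons-674449 -n ingress-nginx get jobs,pods
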
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-674449 addons disable ingress-dns --alsologtostderr -v=1: (1.484947199s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-674449 addons disable ingress --alsologtostderr -v=1: (7.912633978s)
--- FAIL: TestAddons/parallel/Ingress (170.77s)
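
Note: curl exit status 28 is its "operation timed out" code, so the request to the in-VM ingress endpoint hung rather than being refused. A minimal manual-reproduction sketch, assuming the addons-674449 profile is still running; the controller check and the 10-second cap are illustrative additions, not part of the test:

	# Sketch only: assumes profile addons-674449 is still up; --max-time 10 is illustrative.
	kubectl --context addons-674449 -n ingress-nginx get pods,svc -o wide
	out/minikube-linux-amd64 -p addons-674449 ssh \
	  "curl -sS --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"

With --max-time the curl fails fast with status 28 instead of holding the test open for over two minutes.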

TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 image ls --format short --alsologtostderr: (2.269707423s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-864151 image ls --format short --alsologtostderr:

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-864151 image ls --format short --alsologtostderr:
I0908 13:49:58.698215 1130006 out.go:360] Setting OutFile to fd 1 ...
I0908 13:49:58.698590 1130006 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:49:58.698606 1130006 out.go:374] Setting ErrFile to fd 2...
I0908 13:49:58.698613 1130006 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:49:58.699026 1130006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
I0908 13:49:58.699993 1130006 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:49:58.700169 1130006 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:49:58.700892 1130006 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
I0908 13:49:58.700995 1130006 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 13:49:58.718329 1130006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
I0908 13:49:58.718951 1130006 main.go:141] libmachine: () Calling .GetVersion
I0908 13:49:58.719690 1130006 main.go:141] libmachine: Using API Version  1
I0908 13:49:58.719716 1130006 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 13:49:58.720223 1130006 main.go:141] libmachine: () Calling .GetMachineName
I0908 13:49:58.720532 1130006 main.go:141] libmachine: (functional-864151) Calling .GetState
I0908 13:49:58.722892 1130006 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
I0908 13:49:58.722945 1130006 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 13:49:58.739956 1130006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34795
I0908 13:49:58.740522 1130006 main.go:141] libmachine: () Calling .GetVersion
I0908 13:49:58.741034 1130006 main.go:141] libmachine: Using API Version  1
I0908 13:49:58.741061 1130006 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 13:49:58.741421 1130006 main.go:141] libmachine: () Calling .GetMachineName
I0908 13:49:58.741621 1130006 main.go:141] libmachine: (functional-864151) Calling .DriverName
I0908 13:49:58.741870 1130006 ssh_runner.go:195] Run: systemctl --version
I0908 13:49:58.741895 1130006 main.go:141] libmachine: (functional-864151) Calling .GetSSHHostname
I0908 13:49:58.745255 1130006 main.go:141] libmachine: (functional-864151) DBG | domain functional-864151 has defined MAC address 52:54:00:9a:17:49 in network mk-functional-864151
I0908 13:49:58.745683 1130006 main.go:141] libmachine: (functional-864151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:17:49", ip: ""} in network mk-functional-864151: {Iface:virbr1 ExpiryTime:2025-09-08 14:46:34 +0000 UTC Type:0 Mac:52:54:00:9a:17:49 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:functional-864151 Clientid:01:52:54:00:9a:17:49}
I0908 13:49:58.745721 1130006 main.go:141] libmachine: (functional-864151) DBG | domain functional-864151 has defined IP address 192.168.39.136 and MAC address 52:54:00:9a:17:49 in network mk-functional-864151
I0908 13:49:58.745872 1130006 main.go:141] libmachine: (functional-864151) Calling .GetSSHPort
I0908 13:49:58.746099 1130006 main.go:141] libmachine: (functional-864151) Calling .GetSSHKeyPath
I0908 13:49:58.746299 1130006 main.go:141] libmachine: (functional-864151) Calling .GetSSHUsername
I0908 13:49:58.746464 1130006 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/functional-864151/id_rsa Username:docker}
I0908 13:49:58.851629 1130006 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 13:50:00.907976 1130006 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.056289143s)
W0908 13:50:00.908083 1130006 cache_images.go:735] Failed to list images for profile functional-864151 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E0908 13:50:00.898853    9564 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2025-09-08T13:50:00Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I0908 13:50:00.908128 1130006 main.go:141] libmachine: Making call to close driver server
I0908 13:50:00.908143 1130006 main.go:141] libmachine: (functional-864151) Calling .Close
I0908 13:50:00.908568 1130006 main.go:141] libmachine: Successfully made call to close driver server
I0908 13:50:00.908598 1130006 main.go:141] libmachine: (functional-864151) DBG | Closing plugin on server side
I0908 13:50:00.908615 1130006 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 13:50:00.908634 1130006 main.go:141] libmachine: Making call to close driver server
I0908 13:50:00.908644 1130006 main.go:141] libmachine: (functional-864151) Calling .Close
I0908 13:50:00.908905 1130006 main.go:141] libmachine: Successfully made call to close driver server
I0908 13:50:00.908924 1130006 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 13:50:00.908945 1130006 main.go:141] libmachine: (functional-864151) DBG | Closing plugin on server side
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)
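
Note: the image store itself is likely fine; the ListImages RPC hit DeadlineExceeded after ~2.06s, which matches crictl's default 2-second timeout. A hedged retry that separates a slow image service from a hung one, assuming the functional-864151 VM is still up (the 30s value is an arbitrary illustrative choice):

	# Sketch only: assumes the VM is still up; 30s is an arbitrary cap.
	out/minikube-linux-amd64 -p functional-864151 ssh \
	  "sudo crictl --timeout 30s images --output json"
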

TestFunctional/parallel/ImageCommands/ImageListYaml (2.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 image ls --format yaml --alsologtostderr: (2.318124291s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-864151 image ls --format yaml --alsologtostderr:
[]

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-864151 image ls --format yaml --alsologtostderr:
I0908 13:50:00.970781 1130053 out.go:360] Setting OutFile to fd 1 ...
I0908 13:50:00.971064 1130053 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:50:00.971076 1130053 out.go:374] Setting ErrFile to fd 2...
I0908 13:50:00.971080 1130053 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:50:00.971340 1130053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
I0908 13:50:00.972041 1130053 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:50:00.972150 1130053 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:50:00.972552 1130053 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
I0908 13:50:00.972629 1130053 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 13:50:00.988956 1130053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
I0908 13:50:00.989520 1130053 main.go:141] libmachine: () Calling .GetVersion
I0908 13:50:00.990251 1130053 main.go:141] libmachine: Using API Version  1
I0908 13:50:00.990303 1130053 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 13:50:00.990768 1130053 main.go:141] libmachine: () Calling .GetMachineName
I0908 13:50:00.991022 1130053 main.go:141] libmachine: (functional-864151) Calling .GetState
I0908 13:50:00.993161 1130053 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
I0908 13:50:00.993225 1130053 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 13:50:01.009991 1130053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38741
I0908 13:50:01.010509 1130053 main.go:141] libmachine: () Calling .GetVersion
I0908 13:50:01.011131 1130053 main.go:141] libmachine: Using API Version  1
I0908 13:50:01.011169 1130053 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 13:50:01.011558 1130053 main.go:141] libmachine: () Calling .GetMachineName
I0908 13:50:01.011793 1130053 main.go:141] libmachine: (functional-864151) Calling .DriverName
I0908 13:50:01.012022 1130053 ssh_runner.go:195] Run: systemctl --version
I0908 13:50:01.012055 1130053 main.go:141] libmachine: (functional-864151) Calling .GetSSHHostname
I0908 13:50:01.015768 1130053 main.go:141] libmachine: (functional-864151) DBG | domain functional-864151 has defined MAC address 52:54:00:9a:17:49 in network mk-functional-864151
I0908 13:50:01.016210 1130053 main.go:141] libmachine: (functional-864151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:17:49", ip: ""} in network mk-functional-864151: {Iface:virbr1 ExpiryTime:2025-09-08 14:46:34 +0000 UTC Type:0 Mac:52:54:00:9a:17:49 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:functional-864151 Clientid:01:52:54:00:9a:17:49}
I0908 13:50:01.016249 1130053 main.go:141] libmachine: (functional-864151) DBG | domain functional-864151 has defined IP address 192.168.39.136 and MAC address 52:54:00:9a:17:49 in network mk-functional-864151
I0908 13:50:01.016422 1130053 main.go:141] libmachine: (functional-864151) Calling .GetSSHPort
I0908 13:50:01.016605 1130053 main.go:141] libmachine: (functional-864151) Calling .GetSSHKeyPath
I0908 13:50:01.016799 1130053 main.go:141] libmachine: (functional-864151) Calling .GetSSHUsername
I0908 13:50:01.017025 1130053 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/functional-864151/id_rsa Username:docker}
I0908 13:50:01.117407 1130053 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 13:50:03.177586 1130053 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.060120927s)
W0908 13:50:03.177676 1130053 cache_images.go:735] Failed to list images for profile functional-864151 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E0908 13:50:03.166211    9599 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2025-09-08T13:50:03Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I0908 13:50:03.177752 1130053 main.go:141] libmachine: Making call to close driver server
I0908 13:50:03.177763 1130053 main.go:141] libmachine: (functional-864151) Calling .Close
I0908 13:50:03.178080 1130053 main.go:141] libmachine: Successfully made call to close driver server
I0908 13:50:03.178102 1130053 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 13:50:03.178111 1130053 main.go:141] libmachine: Making call to close driver server
I0908 13:50:03.178120 1130053 main.go:141] libmachine: (functional-864151) Calling .Close
I0908 13:50:03.178136 1130053 main.go:141] libmachine: (functional-864151) DBG | Closing plugin on server side
I0908 13:50:03.178418 1130053 main.go:141] libmachine: (functional-864151) DBG | Closing plugin on server side
I0908 13:50:03.178428 1130053 main.go:141] libmachine: Successfully made call to close driver server
I0908 13:50:03.178470 1130053 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:290: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (2.32s)
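
Note: same DeadlineExceeded failure mode as ImageListShort above, surfacing here as an empty YAML list ([]). A hedged next step is to check whether CRI-O itself was under load at the time, assuming systemd inside the guest (true for the minikube ISO):

	# Sketch only: assumes systemd in the guest (true for the minikube ISO).
	out/minikube-linux-amd64 -p functional-864151 ssh \
	  "systemctl status crio --no-pager; sudo journalctl -u crio --since '-5min' --no-pager | tail -n 50"
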

TestFunctional/parallel/ImageCommands/ImageBuild (6.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-864151 ssh pgrep buildkitd: exit status 1 (220.888669ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image build -t localhost/my-image:functional-864151 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 image build -t localhost/my-image:functional-864151 testdata/build --alsologtostderr: (3.613970036s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-864151 image build -t localhost/my-image:functional-864151 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5954bc6589f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-864151
--> bc432310ca4
Successfully tagged localhost/my-image:functional-864151
bc432310ca41e92ce315ff7b417e60e1a011341c899d9c2a3dbace9c80e01966
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-864151 image build -t localhost/my-image:functional-864151 testdata/build --alsologtostderr:
I0908 13:50:03.508010 1130107 out.go:360] Setting OutFile to fd 1 ...
I0908 13:50:03.508355 1130107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:50:03.508368 1130107 out.go:374] Setting ErrFile to fd 2...
I0908 13:50:03.508373 1130107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:50:03.508623 1130107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
I0908 13:50:03.509325 1130107 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:50:03.510255 1130107 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:50:03.510713 1130107 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
I0908 13:50:03.510771 1130107 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 13:50:03.527829 1130107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
I0908 13:50:03.528503 1130107 main.go:141] libmachine: () Calling .GetVersion
I0908 13:50:03.529178 1130107 main.go:141] libmachine: Using API Version  1
I0908 13:50:03.529202 1130107 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 13:50:03.529616 1130107 main.go:141] libmachine: () Calling .GetMachineName
I0908 13:50:03.529890 1130107 main.go:141] libmachine: (functional-864151) Calling .GetState
I0908 13:50:03.532250 1130107 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
I0908 13:50:03.532343 1130107 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 13:50:03.549653 1130107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33847
I0908 13:50:03.550223 1130107 main.go:141] libmachine: () Calling .GetVersion
I0908 13:50:03.550879 1130107 main.go:141] libmachine: Using API Version  1
I0908 13:50:03.550930 1130107 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 13:50:03.551512 1130107 main.go:141] libmachine: () Calling .GetMachineName
I0908 13:50:03.551910 1130107 main.go:141] libmachine: (functional-864151) Calling .DriverName
I0908 13:50:03.552178 1130107 ssh_runner.go:195] Run: systemctl --version
I0908 13:50:03.552222 1130107 main.go:141] libmachine: (functional-864151) Calling .GetSSHHostname
I0908 13:50:03.555572 1130107 main.go:141] libmachine: (functional-864151) DBG | domain functional-864151 has defined MAC address 52:54:00:9a:17:49 in network mk-functional-864151
I0908 13:50:03.556127 1130107 main.go:141] libmachine: (functional-864151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:17:49", ip: ""} in network mk-functional-864151: {Iface:virbr1 ExpiryTime:2025-09-08 14:46:34 +0000 UTC Type:0 Mac:52:54:00:9a:17:49 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:functional-864151 Clientid:01:52:54:00:9a:17:49}
I0908 13:50:03.556170 1130107 main.go:141] libmachine: (functional-864151) DBG | domain functional-864151 has defined IP address 192.168.39.136 and MAC address 52:54:00:9a:17:49 in network mk-functional-864151
I0908 13:50:03.556338 1130107 main.go:141] libmachine: (functional-864151) Calling .GetSSHPort
I0908 13:50:03.556555 1130107 main.go:141] libmachine: (functional-864151) Calling .GetSSHKeyPath
I0908 13:50:03.556746 1130107 main.go:141] libmachine: (functional-864151) Calling .GetSSHUsername
I0908 13:50:03.556977 1130107 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/functional-864151/id_rsa Username:docker}
I0908 13:50:03.651124 1130107 build_images.go:161] Building image from path: /tmp/build.3698394552.tar
I0908 13:50:03.651222 1130107 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 13:50:03.666945 1130107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3698394552.tar
I0908 13:50:03.676543 1130107 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3698394552.tar: stat -c "%s %y" /var/lib/minikube/build/build.3698394552.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3698394552.tar': No such file or directory
I0908 13:50:03.676586 1130107 ssh_runner.go:362] scp /tmp/build.3698394552.tar --> /var/lib/minikube/build/build.3698394552.tar (3072 bytes)
I0908 13:50:03.753127 1130107 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3698394552
I0908 13:50:03.772842 1130107 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3698394552 -xf /var/lib/minikube/build/build.3698394552.tar
I0908 13:50:03.796787 1130107 crio.go:315] Building image: /var/lib/minikube/build/build.3698394552
I0908 13:50:03.796908 1130107 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-864151 /var/lib/minikube/build/build.3698394552 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0908 13:50:07.008178 1130107 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-864151 /var/lib/minikube/build/build.3698394552 --cgroup-manager=cgroupfs: (3.21123276s)
I0908 13:50:07.008272 1130107 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3698394552
I0908 13:50:07.041336 1130107 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3698394552.tar
I0908 13:50:07.060350 1130107 build_images.go:217] Built localhost/my-image:functional-864151 from /tmp/build.3698394552.tar
I0908 13:50:07.060395 1130107 build_images.go:133] succeeded building to: functional-864151
I0908 13:50:07.060400 1130107 build_images.go:134] failed building to: 
I0908 13:50:07.060440 1130107 main.go:141] libmachine: Making call to close driver server
I0908 13:50:07.060459 1130107 main.go:141] libmachine: (functional-864151) Calling .Close
I0908 13:50:07.060789 1130107 main.go:141] libmachine: Successfully made call to close driver server
I0908 13:50:07.060810 1130107 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 13:50:07.060819 1130107 main.go:141] libmachine: Making call to close driver server
I0908 13:50:07.060826 1130107 main.go:141] libmachine: (functional-864151) Calling .Close
I0908 13:50:07.061157 1130107 main.go:141] libmachine: (functional-864151) DBG | Closing plugin on server side
I0908 13:50:07.061208 1130107 main.go:141] libmachine: Successfully made call to close driver server
I0908 13:50:07.061219 1130107 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 image ls: (2.291842937s)
functional_test.go:461: expected "localhost/my-image:functional-864151" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (6.13s)
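Note on the failure above: the build step itself succeeded (build_images.go reports localhost/my-image:functional-864151 as built from the streamed tar), but the follow-up image ls did not find the image in the runtime's store. A minimal manual repro sketch, assuming a running functional-864151 profile and a local directory holding the same Dockerfile (the ./testdata/build path is a stand-in; the test streams the build context as a tar instead):

    out/minikube-linux-amd64 -p functional-864151 image build -t localhost/my-image:functional-864151 ./testdata/build
    out/minikube-linux-amd64 -p functional-864151 image ls | grep my-image:functional-864151

If the second command prints nothing, the image was built but never landed in (or was removed from) the cri-o image store, which is the symptom recorded here.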

                                                
                                    
TestPreload (170.42s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-981850 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-981850 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m45.109772792s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-981850 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-981850 image pull gcr.io/k8s-minikube/busybox: (2.698541172s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-981850
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-981850: (7.33937925s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-981850 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-981850 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (51.909641149s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-981850 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-09-08 14:38:28.959985555 +0000 UTC m=+3801.189534377
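The failing scenario can be replayed by hand with exactly the flags the test used (all five commands appear verbatim in the preload_test.go steps above); the image pulled into the non-preloaded cluster is expected to survive the stop/start cycle:

    out/minikube-linux-amd64 start -p test-preload-981850 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
    out/minikube-linux-amd64 -p test-preload-981850 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-981850
    out/minikube-linux-amd64 start -p test-preload-981850 --memory=3072 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-981850 image list

On the last step, gcr.io/k8s-minikube/busybox should appear alongside the registry.k8s.io images; in this run it did not, suggesting the second start repopulated the image store from the v1.32.0 preload tarball without preserving the manually pulled image.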
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-981850 -n test-preload-981850
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-981850 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-981850 logs -n 25: (1.240744611s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-546632 ssh -n multinode-546632-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:22 UTC │ 08 Sep 25 14:22 UTC │
	│ ssh     │ multinode-546632 ssh -n multinode-546632 sudo cat /home/docker/cp-test_multinode-546632-m03_multinode-546632.txt                                          │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:22 UTC │ 08 Sep 25 14:22 UTC │
	│ cp      │ multinode-546632 cp multinode-546632-m03:/home/docker/cp-test.txt multinode-546632-m02:/home/docker/cp-test_multinode-546632-m03_multinode-546632-m02.txt │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:22 UTC │ 08 Sep 25 14:22 UTC │
	│ ssh     │ multinode-546632 ssh -n multinode-546632-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:22 UTC │ 08 Sep 25 14:22 UTC │
	│ ssh     │ multinode-546632 ssh -n multinode-546632-m02 sudo cat /home/docker/cp-test_multinode-546632-m03_multinode-546632-m02.txt                                  │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:22 UTC │ 08 Sep 25 14:22 UTC │
	│ node    │ multinode-546632 node stop m03                                                                                                                            │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:22 UTC │ 08 Sep 25 14:23 UTC │
	│ node    │ multinode-546632 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:23 UTC │ 08 Sep 25 14:23 UTC │
	│ node    │ list -p multinode-546632                                                                                                                                  │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:23 UTC │                     │
	│ stop    │ -p multinode-546632                                                                                                                                       │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:23 UTC │ 08 Sep 25 14:26 UTC │
	│ start   │ -p multinode-546632 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:26 UTC │ 08 Sep 25 14:29 UTC │
	│ node    │ list -p multinode-546632                                                                                                                                  │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:29 UTC │                     │
	│ node    │ multinode-546632 node delete m03                                                                                                                          │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:29 UTC │ 08 Sep 25 14:29 UTC │
	│ stop    │ multinode-546632 stop                                                                                                                                     │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:29 UTC │ 08 Sep 25 14:32 UTC │
	│ start   │ -p multinode-546632 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:32 UTC │ 08 Sep 25 14:34 UTC │
	│ node    │ list -p multinode-546632                                                                                                                                  │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:34 UTC │                     │
	│ start   │ -p multinode-546632-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-546632-m02 │ jenkins │ v1.36.0 │ 08 Sep 25 14:34 UTC │                     │
	│ start   │ -p multinode-546632-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-546632-m03 │ jenkins │ v1.36.0 │ 08 Sep 25 14:34 UTC │ 08 Sep 25 14:35 UTC │
	│ node    │ add -p multinode-546632                                                                                                                                   │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:35 UTC │                     │
	│ delete  │ -p multinode-546632-m03                                                                                                                                   │ multinode-546632-m03 │ jenkins │ v1.36.0 │ 08 Sep 25 14:35 UTC │ 08 Sep 25 14:35 UTC │
	│ delete  │ -p multinode-546632                                                                                                                                       │ multinode-546632     │ jenkins │ v1.36.0 │ 08 Sep 25 14:35 UTC │ 08 Sep 25 14:35 UTC │
	│ start   │ -p test-preload-981850 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-981850  │ jenkins │ v1.36.0 │ 08 Sep 25 14:35 UTC │ 08 Sep 25 14:37 UTC │
	│ image   │ test-preload-981850 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-981850  │ jenkins │ v1.36.0 │ 08 Sep 25 14:37 UTC │ 08 Sep 25 14:37 UTC │
	│ stop    │ -p test-preload-981850                                                                                                                                    │ test-preload-981850  │ jenkins │ v1.36.0 │ 08 Sep 25 14:37 UTC │ 08 Sep 25 14:37 UTC │
	│ start   │ -p test-preload-981850 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-981850  │ jenkins │ v1.36.0 │ 08 Sep 25 14:37 UTC │ 08 Sep 25 14:38 UTC │
	│ image   │ test-preload-981850 image list                                                                                                                            │ test-preload-981850  │ jenkins │ v1.36.0 │ 08 Sep 25 14:38 UTC │ 08 Sep 25 14:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 14:37:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 14:37:36.849107 1152426 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:37:36.849260 1152426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:37:36.849266 1152426 out.go:374] Setting ErrFile to fd 2...
	I0908 14:37:36.849271 1152426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:37:36.849499 1152426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 14:37:36.850185 1152426 out.go:368] Setting JSON to false
	I0908 14:37:36.851222 1152426 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":19201,"bootTime":1757323056,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 14:37:36.851353 1152426 start.go:140] virtualization: kvm guest
	I0908 14:37:36.853927 1152426 out.go:179] * [test-preload-981850] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 14:37:36.855473 1152426 notify.go:220] Checking for updates...
	I0908 14:37:36.855519 1152426 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 14:37:36.856965 1152426 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:37:36.858469 1152426 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 14:37:36.859757 1152426 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:37:36.861375 1152426 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 14:37:36.863077 1152426 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:37:36.865265 1152426 config.go:182] Loaded profile config "test-preload-981850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0908 14:37:36.865693 1152426 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:37:36.865792 1152426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:37:36.883370 1152426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42133
	I0908 14:37:36.884042 1152426 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:37:36.884714 1152426 main.go:141] libmachine: Using API Version  1
	I0908 14:37:36.884764 1152426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:37:36.885217 1152426 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:37:36.885457 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	I0908 14:37:36.887752 1152426 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0908 14:37:36.889223 1152426 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:37:36.889594 1152426 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:37:36.889643 1152426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:37:36.905814 1152426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
	I0908 14:37:36.906342 1152426 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:37:36.906797 1152426 main.go:141] libmachine: Using API Version  1
	I0908 14:37:36.906816 1152426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:37:36.907203 1152426 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:37:36.907452 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	I0908 14:37:36.947840 1152426 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 14:37:36.949042 1152426 start.go:304] selected driver: kvm2
	I0908 14:37:36.949059 1152426 start.go:918] validating driver "kvm2" against &{Name:test-preload-981850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-981850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:37:36.949229 1152426 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:37:36.950236 1152426 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:37:36.950325 1152426 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21508-1116714/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 14:37:36.967584 1152426 install.go:137] /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 14:37:36.968048 1152426 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 14:37:36.968087 1152426 cni.go:84] Creating CNI manager for ""
	I0908 14:37:36.968145 1152426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:37:36.968207 1152426 start.go:348] cluster config:
	{Name:test-preload-981850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-981850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:37:36.968320 1152426 iso.go:125] acquiring lock: {Name:mk347390bf24761f2c39bf4cd5b718f157a50faa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:37:36.971102 1152426 out.go:179] * Starting "test-preload-981850" primary control-plane node in "test-preload-981850" cluster
	I0908 14:37:36.972451 1152426 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0908 14:37:36.997182 1152426 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0908 14:37:36.997221 1152426 cache.go:58] Caching tarball of preloaded images
	I0908 14:37:36.997556 1152426 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0908 14:37:36.999637 1152426 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0908 14:37:37.001453 1152426 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 14:37:37.026329 1152426 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0908 14:37:40.039198 1152426 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 14:37:40.039309 1152426 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 14:37:40.824000 1152426 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0908 14:37:40.824171 1152426 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/config.json ...
	I0908 14:37:40.824434 1152426 start.go:360] acquireMachinesLock for test-preload-981850: {Name:mk0626ae9b324aeb819357e3de70b05b9e4c30a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 14:37:40.824512 1152426 start.go:364] duration metric: took 51.648µs to acquireMachinesLock for "test-preload-981850"
	I0908 14:37:40.824536 1152426 start.go:96] Skipping create...Using existing machine configuration
	I0908 14:37:40.824546 1152426 fix.go:54] fixHost starting: 
	I0908 14:37:40.824807 1152426 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:37:40.824855 1152426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:37:40.841417 1152426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39475
	I0908 14:37:40.841945 1152426 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:37:40.842535 1152426 main.go:141] libmachine: Using API Version  1
	I0908 14:37:40.842570 1152426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:37:40.842984 1152426 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:37:40.843272 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	I0908 14:37:40.843465 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetState
	I0908 14:37:40.845500 1152426 fix.go:112] recreateIfNeeded on test-preload-981850: state=Stopped err=<nil>
	I0908 14:37:40.845541 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	W0908 14:37:40.845754 1152426 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 14:37:40.847948 1152426 out.go:252] * Restarting existing kvm2 VM for "test-preload-981850" ...
	I0908 14:37:40.847992 1152426 main.go:141] libmachine: (test-preload-981850) Calling .Start
	I0908 14:37:40.848245 1152426 main.go:141] libmachine: (test-preload-981850) starting domain...
	I0908 14:37:40.848269 1152426 main.go:141] libmachine: (test-preload-981850) ensuring networks are active...
	I0908 14:37:40.849268 1152426 main.go:141] libmachine: (test-preload-981850) Ensuring network default is active
	I0908 14:37:40.849684 1152426 main.go:141] libmachine: (test-preload-981850) Ensuring network mk-test-preload-981850 is active
	I0908 14:37:40.850124 1152426 main.go:141] libmachine: (test-preload-981850) getting domain XML...
	I0908 14:37:40.851334 1152426 main.go:141] libmachine: (test-preload-981850) creating domain...
	I0908 14:37:41.231995 1152426 main.go:141] libmachine: (test-preload-981850) waiting for IP...
	I0908 14:37:41.232920 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:41.233410 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:41.233466 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:41.233378 1152478 retry.go:31] will retry after 270.077541ms: waiting for domain to come up
	I0908 14:37:41.505080 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:41.505720 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:41.505752 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:41.505642 1152478 retry.go:31] will retry after 388.712056ms: waiting for domain to come up
	I0908 14:37:41.896685 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:41.897277 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:41.897376 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:41.897269 1152478 retry.go:31] will retry after 464.164093ms: waiting for domain to come up
	I0908 14:37:42.363315 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:42.363754 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:42.363777 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:42.363726 1152478 retry.go:31] will retry after 601.866344ms: waiting for domain to come up
	I0908 14:37:42.967884 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:42.968313 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:42.968375 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:42.968283 1152478 retry.go:31] will retry after 736.339394ms: waiting for domain to come up
	I0908 14:37:43.706323 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:43.706842 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:43.706885 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:43.706784 1152478 retry.go:31] will retry after 816.777008ms: waiting for domain to come up
	I0908 14:37:44.525179 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:44.525641 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:44.525677 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:44.525608 1152478 retry.go:31] will retry after 981.735899ms: waiting for domain to come up
	I0908 14:37:45.508845 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:45.509344 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:45.509386 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:45.509319 1152478 retry.go:31] will retry after 1.153723123s: waiting for domain to come up
	I0908 14:37:46.664536 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:46.665150 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:46.665182 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:46.665082 1152478 retry.go:31] will retry after 1.737069137s: waiting for domain to come up
	I0908 14:37:48.405246 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:48.405701 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:48.405723 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:48.405666 1152478 retry.go:31] will retry after 1.978322862s: waiting for domain to come up
	I0908 14:37:50.386859 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:50.387419 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:50.387455 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:50.387393 1152478 retry.go:31] will retry after 2.071299357s: waiting for domain to come up
	I0908 14:37:52.461374 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:52.461782 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:52.461833 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:52.461749 1152478 retry.go:31] will retry after 3.154632809s: waiting for domain to come up
	I0908 14:37:55.620352 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:55.620891 1152426 main.go:141] libmachine: (test-preload-981850) DBG | unable to find current IP address of domain test-preload-981850 in network mk-test-preload-981850
	I0908 14:37:55.620922 1152426 main.go:141] libmachine: (test-preload-981850) DBG | I0908 14:37:55.620855 1152478 retry.go:31] will retry after 3.709006815s: waiting for domain to come up
	I0908 14:37:59.334676 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.335361 1152426 main.go:141] libmachine: (test-preload-981850) found domain IP: 192.168.39.184
	I0908 14:37:59.335388 1152426 main.go:141] libmachine: (test-preload-981850) reserving static IP address...
	I0908 14:37:59.335402 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has current primary IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.335931 1152426 main.go:141] libmachine: (test-preload-981850) reserved static IP address 192.168.39.184 for domain test-preload-981850
	I0908 14:37:59.335969 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "test-preload-981850", mac: "52:54:00:36:49:cb", ip: "192.168.39.184"} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:37:59.335986 1152426 main.go:141] libmachine: (test-preload-981850) waiting for SSH...
	I0908 14:37:59.336045 1152426 main.go:141] libmachine: (test-preload-981850) DBG | skip adding static IP to network mk-test-preload-981850 - found existing host DHCP lease matching {name: "test-preload-981850", mac: "52:54:00:36:49:cb", ip: "192.168.39.184"}
	I0908 14:37:59.336065 1152426 main.go:141] libmachine: (test-preload-981850) DBG | Getting to WaitForSSH function...
	I0908 14:37:59.338138 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.338464 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:37:59.338496 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.338617 1152426 main.go:141] libmachine: (test-preload-981850) DBG | Using SSH client type: external
	I0908 14:37:59.338638 1152426 main.go:141] libmachine: (test-preload-981850) DBG | Using SSH private key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/test-preload-981850/id_rsa (-rw-------)
	I0908 14:37:59.338668 1152426 main.go:141] libmachine: (test-preload-981850) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/test-preload-981850/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 14:37:59.338685 1152426 main.go:141] libmachine: (test-preload-981850) DBG | About to run SSH command:
	I0908 14:37:59.338702 1152426 main.go:141] libmachine: (test-preload-981850) DBG | exit 0
	I0908 14:37:59.464408 1152426 main.go:141] libmachine: (test-preload-981850) DBG | SSH cmd err, output: <nil>: 
	I0908 14:37:59.464787 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetConfigRaw
	I0908 14:37:59.465570 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetIP
	I0908 14:37:59.468200 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.468550 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:37:59.468587 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.468897 1152426 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/config.json ...
	I0908 14:37:59.469148 1152426 machine.go:93] provisionDockerMachine start ...
	I0908 14:37:59.469172 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	I0908 14:37:59.469444 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:37:59.471924 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.472256 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:37:59.472305 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.472431 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHPort
	I0908 14:37:59.472657 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:37:59.472842 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:37:59.473000 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHUsername
	I0908 14:37:59.473164 1152426 main.go:141] libmachine: Using SSH client type: native
	I0908 14:37:59.473493 1152426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0908 14:37:59.473508 1152426 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 14:37:59.584732 1152426 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 14:37:59.584763 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetMachineName
	I0908 14:37:59.585077 1152426 buildroot.go:166] provisioning hostname "test-preload-981850"
	I0908 14:37:59.585107 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetMachineName
	I0908 14:37:59.585330 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:37:59.588456 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.588981 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:37:59.589055 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.589147 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHPort
	I0908 14:37:59.589380 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:37:59.589572 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:37:59.589749 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHUsername
	I0908 14:37:59.589961 1152426 main.go:141] libmachine: Using SSH client type: native
	I0908 14:37:59.590357 1152426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0908 14:37:59.590385 1152426 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-981850 && echo "test-preload-981850" | sudo tee /etc/hostname
	I0908 14:37:59.718799 1152426 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-981850
	
	I0908 14:37:59.718842 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:37:59.721900 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.722287 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:37:59.722324 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.722608 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHPort
	I0908 14:37:59.722825 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:37:59.722996 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:37:59.723142 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHUsername
	I0908 14:37:59.723296 1152426 main.go:141] libmachine: Using SSH client type: native
	I0908 14:37:59.723519 1152426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0908 14:37:59.723535 1152426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-981850' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-981850/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-981850' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:37:59.849046 1152426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:37:59.849087 1152426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21508-1116714/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-1116714/.minikube}
	I0908 14:37:59.849138 1152426 buildroot.go:174] setting up certificates
	I0908 14:37:59.849151 1152426 provision.go:84] configureAuth start
	I0908 14:37:59.849166 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetMachineName
	I0908 14:37:59.849581 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetIP
	I0908 14:37:59.852831 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.853264 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:37:59.853316 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.853474 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:37:59.855963 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.856350 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:37:59.856387 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:37:59.856542 1152426 provision.go:143] copyHostCerts
	I0908 14:37:59.856621 1152426 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem, removing ...
	I0908 14:37:59.856632 1152426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem
	I0908 14:37:59.856702 1152426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem (1082 bytes)
	I0908 14:37:59.856801 1152426 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem, removing ...
	I0908 14:37:59.856813 1152426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem
	I0908 14:37:59.856837 1152426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem (1123 bytes)
	I0908 14:37:59.856892 1152426 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem, removing ...
	I0908 14:37:59.856900 1152426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem
	I0908 14:37:59.856926 1152426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem (1675 bytes)
	I0908 14:37:59.856976 1152426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem org=jenkins.test-preload-981850 san=[127.0.0.1 192.168.39.184 localhost minikube test-preload-981850]
	I0908 14:38:00.125610 1152426 provision.go:177] copyRemoteCerts
	I0908 14:38:00.125679 1152426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:38:00.125709 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:38:00.128747 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.129022 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:38:00.129047 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.129258 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHPort
	I0908 14:38:00.129486 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:38:00.129682 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHUsername
	I0908 14:38:00.129815 1152426 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/test-preload-981850/id_rsa Username:docker}
	I0908 14:38:00.217705 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 14:38:00.252948 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0908 14:38:00.287735 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 14:38:00.321317 1152426 provision.go:87] duration metric: took 472.14269ms to configureAuth
	I0908 14:38:00.321362 1152426 buildroot.go:189] setting minikube options for container-runtime
	I0908 14:38:00.321568 1152426 config.go:182] Loaded profile config "test-preload-981850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0908 14:38:00.321659 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:38:00.324406 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.324768 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:38:00.324792 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.324977 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHPort
	I0908 14:38:00.325221 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:38:00.325421 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:38:00.325553 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHUsername
	I0908 14:38:00.325744 1152426 main.go:141] libmachine: Using SSH client type: native
	I0908 14:38:00.325968 1152426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0908 14:38:00.325988 1152426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 14:38:00.590299 1152426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 14:38:00.590334 1152426 machine.go:96] duration metric: took 1.121169977s to provisionDockerMachine
	I0908 14:38:00.590348 1152426 start.go:293] postStartSetup for "test-preload-981850" (driver="kvm2")
	I0908 14:38:00.590359 1152426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:38:00.590377 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	I0908 14:38:00.590761 1152426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:38:00.590823 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:38:00.594195 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.594602 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:38:00.594633 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.594907 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHPort
	I0908 14:38:00.595153 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:38:00.595331 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHUsername
	I0908 14:38:00.595669 1152426 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/test-preload-981850/id_rsa Username:docker}
	I0908 14:38:00.686556 1152426 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:38:00.692397 1152426 info.go:137] Remote host: Buildroot 2025.02
	I0908 14:38:00.692439 1152426 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/addons for local assets ...
	I0908 14:38:00.692547 1152426 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/files for local assets ...
	I0908 14:38:00.692646 1152426 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem -> 11208752.pem in /etc/ssl/certs
	I0908 14:38:00.692766 1152426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:38:00.706483 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:38:00.741275 1152426 start.go:296] duration metric: took 150.904708ms for postStartSetup
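The filesync scan above mirrors everything under .minikube/files onto the guest root, which is why files/etc/ssl/certs/11208752.pem lands at /etc/ssl/certs/11208752.pem. A sketch of that path translation (guestDest is a hypothetical helper, not minikube's API):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// guestDest maps a local asset under the .minikube/files tree onto the
// guest filesystem by treating that tree as the guest's root.
func guestDest(filesRoot, localAsset string) (string, error) {
	rel, err := filepath.Rel(filesRoot, localAsset)
	if err != nil || strings.HasPrefix(rel, "..") {
		return "", fmt.Errorf("asset %q is outside %q", localAsset, filesRoot)
	}
	return "/" + filepath.ToSlash(rel), nil
}

func main() {
	dest, err := guestDest(
		"/home/jenkins/minikube-integration/21508-1116714/.minikube/files",
		"/home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(dest) // /etc/ssl/certs/11208752.pem
}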
	I0908 14:38:00.741335 1152426 fix.go:56] duration metric: took 19.916788633s for fixHost
	I0908 14:38:00.741364 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:38:00.744750 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.745091 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:38:00.745120 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.745380 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHPort
	I0908 14:38:00.745625 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:38:00.745791 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:38:00.745924 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHUsername
	I0908 14:38:00.746067 1152426 main.go:141] libmachine: Using SSH client type: native
	I0908 14:38:00.746332 1152426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0908 14:38:00.746353 1152426 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 14:38:00.857750 1152426 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757342280.826818247
	
	I0908 14:38:00.857778 1152426 fix.go:216] guest clock: 1757342280.826818247
	I0908 14:38:00.857790 1152426 fix.go:229] Guest: 2025-09-08 14:38:00.826818247 +0000 UTC Remote: 2025-09-08 14:38:00.741341837 +0000 UTC m=+23.938924171 (delta=85.47641ms)
	I0908 14:38:00.857816 1152426 fix.go:200] guest clock delta is within tolerance: 85.47641ms
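The clock check above runs `date +%s.%N` on the guest and compares the result with the host clock. A sketch of that comparison using the two timestamps from this log; the 1s tolerance is an assumption for illustration (the log only reports that the 85ms delta was within tolerance):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts "date +%s.%N" output such as
// "1757342280.826818247" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1757342280.826818247")
	if err != nil {
		panic(err)
	}
	host := time.Unix(1757342280, 741341837) // host-side timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	// Tolerance value assumed for the sketch; the real threshold isn't in the log.
	const tolerance = time.Second
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
}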
	I0908 14:38:00.857823 1152426 start.go:83] releasing machines lock for "test-preload-981850", held for 20.033296914s
	I0908 14:38:00.857852 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	I0908 14:38:00.858199 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetIP
	I0908 14:38:00.861319 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.861791 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:38:00.861821 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.861964 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	I0908 14:38:00.862599 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	I0908 14:38:00.862791 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	I0908 14:38:00.862908 1152426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:38:00.862973 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:38:00.863013 1152426 ssh_runner.go:195] Run: cat /version.json
	I0908 14:38:00.863050 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:38:00.865819 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.866179 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.866215 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:38:00.866238 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.866476 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHPort
	I0908 14:38:00.866695 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:38:00.866695 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:38:00.866724 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:00.866843 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHPort
	I0908 14:38:00.866910 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHUsername
	I0908 14:38:00.867021 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:38:00.867087 1152426 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/test-preload-981850/id_rsa Username:docker}
	I0908 14:38:00.867139 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHUsername
	I0908 14:38:00.867282 1152426 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/test-preload-981850/id_rsa Username:docker}
	I0908 14:38:00.974929 1152426 ssh_runner.go:195] Run: systemctl --version
	I0908 14:38:00.981629 1152426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 14:38:01.130667 1152426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 14:38:01.138804 1152426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 14:38:01.138920 1152426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:38:01.160795 1152426 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 14:38:01.160827 1152426 start.go:495] detecting cgroup driver to use...
	I0908 14:38:01.160909 1152426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 14:38:01.181816 1152426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 14:38:01.201396 1152426 docker.go:218] disabling cri-docker service (if available) ...
	I0908 14:38:01.201474 1152426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 14:38:01.219986 1152426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 14:38:01.238437 1152426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 14:38:01.394564 1152426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 14:38:01.550991 1152426 docker.go:234] disabling docker service ...
	I0908 14:38:01.551083 1152426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 14:38:01.568641 1152426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 14:38:01.586517 1152426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 14:38:01.811016 1152426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 14:38:01.969767 1152426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:38:01.987441 1152426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:38:02.015914 1152426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0908 14:38:02.015989 1152426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:38:02.032054 1152426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 14:38:02.032145 1152426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:38:02.046807 1152426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:38:02.061967 1152426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:38:02.077021 1152426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:38:02.092990 1152426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:38:02.107612 1152426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:38:02.132978 1152426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
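Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands in this log, not a dump of the actual file; the section headers are CRI-O's usual drop-in layout and are assumed here:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]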
	I0908 14:38:02.147578 1152426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:38:02.160223 1152426 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 14:38:02.160295 1152426 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 14:38:02.185871 1152426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
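The sequence above is a probe-then-fix: reading net.bridge.bridge-nf-call-iptables fails while br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is switched on. A sketch of the same fallback, meant to run on the guest with sudo available:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBrNetfilter mirrors the probe-then-fix sequence in the log: if the
// bridge-nf-call-iptables sysctl can't be read, load br_netfilter, then
// make sure IPv4 forwarding is enabled.
func ensureBrNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// A missing /proc/sys/net/bridge/* key usually means the module isn't loaded.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBrNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}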
	I0908 14:38:02.199666 1152426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:38:02.351444 1152426 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 14:38:02.475980 1152426 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 14:38:02.476102 1152426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 14:38:02.482526 1152426 start.go:563] Will wait 60s for crictl version
	I0908 14:38:02.482603 1152426 ssh_runner.go:195] Run: which crictl
	I0908 14:38:02.487458 1152426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:38:02.534612 1152426 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 14:38:02.534711 1152426 ssh_runner.go:195] Run: crio --version
	I0908 14:38:02.570804 1152426 ssh_runner.go:195] Run: crio --version
	I0908 14:38:02.608156 1152426 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0908 14:38:02.609790 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetIP
	I0908 14:38:02.613026 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:02.613466 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:38:02.613490 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:02.613819 1152426 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0908 14:38:02.619242 1152426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
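The /etc/hosts update above is a filter-and-append rewrite: drop any line already mapping host.minikube.internal, append the fresh entry, and copy the temp file back over /etc/hosts. The same pattern repeats later in this log for control-plane.minikube.internal. A sketch of the rewrite itself:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<name>" and appends a
// fresh "<ip>\t<name>" mapping, mirroring the grep -v / echo pipeline above.
func upsertHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}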
	I0908 14:38:02.637180 1152426 kubeadm.go:875] updating cluster {Name:test-preload-981850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-981850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 14:38:02.637335 1152426 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0908 14:38:02.637394 1152426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:38:02.683284 1152426 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0908 14:38:02.683354 1152426 ssh_runner.go:195] Run: which lz4
	I0908 14:38:02.688409 1152426 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 14:38:02.693819 1152426 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 14:38:02.693880 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0908 14:38:04.441112 1152426 crio.go:462] duration metric: took 1.75278155s to copy over tarball
	I0908 14:38:04.441232 1152426 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 14:38:06.332266 1152426 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.890985519s)
	I0908 14:38:06.332308 1152426 crio.go:469] duration metric: took 1.891148569s to extract the tarball
	I0908 14:38:06.332316 1152426 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 14:38:06.374754 1152426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:38:06.422980 1152426 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 14:38:06.423009 1152426 cache_images.go:85] Images are preloaded, skipping loading
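The preload handling above is check → copy → extract → verify: stat /preloaded.tar.lz4 on the guest, scp the ~398 MB tarball when it's absent, untar into /var with xattrs preserved, delete the tarball, then re-list images to confirm. A condensed sketch of that decision flow; the Runner interface and fakeRunner are hypothetical stand-ins for minikube's ssh_runner:

package main

import (
	"errors"
	"fmt"
	"strings"
)

// Runner abstracts command execution on the guest (ssh_runner in the log).
type Runner interface {
	Run(cmd string) error
}

// ensurePreload skips the upload when the tarball already exists on the
// guest, otherwise uploads it, then extracts it into /var and cleans up.
func ensurePreload(r Runner, upload func(dst string) error) error {
	const tarball = "/preloaded.tar.lz4"
	if r.Run(`stat -c "%s %y" `+tarball) != nil {
		if err := upload(tarball); err != nil {
			return fmt.Errorf("upload preload: %w", err)
		}
	}
	if err := r.Run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball); err != nil {
		return fmt.Errorf("extract preload: %w", err)
	}
	return r.Run("sudo rm -f " + tarball)
}

type fakeRunner struct{}

func (fakeRunner) Run(cmd string) error {
	fmt.Println("run:", cmd)
	if strings.HasPrefix(cmd, "stat") {
		return errors.New("No such file or directory") // force the upload path
	}
	return nil
}

func main() {
	_ = ensurePreload(fakeRunner{}, func(dst string) error {
		fmt.Println("scp preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ->", dst)
		return nil
	})
}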
	I0908 14:38:06.423027 1152426 kubeadm.go:926] updating node { 192.168.39.184 8443 v1.32.0 crio true true} ...
	I0908 14:38:06.423188 1152426 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-981850 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-981850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 14:38:06.423277 1152426 ssh_runner.go:195] Run: crio config
	I0908 14:38:06.474547 1152426 cni.go:84] Creating CNI manager for ""
	I0908 14:38:06.474573 1152426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:38:06.474584 1152426 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 14:38:06.474608 1152426 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.184 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-981850 NodeName:test-preload-981850 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 14:38:06.474744 1152426 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-981850"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.184"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.184"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 14:38:06.474812 1152426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0908 14:38:06.488952 1152426 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 14:38:06.489055 1152426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 14:38:06.502493 1152426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0908 14:38:06.527533 1152426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 14:38:06.552383 1152426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I0908 14:38:06.578581 1152426 ssh_runner.go:195] Run: grep 192.168.39.184	control-plane.minikube.internal$ /etc/hosts
	I0908 14:38:06.583360 1152426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 14:38:06.600194 1152426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:38:06.748571 1152426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:38:06.778692 1152426 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850 for IP: 192.168.39.184
	I0908 14:38:06.778719 1152426 certs.go:194] generating shared ca certs ...
	I0908 14:38:06.778736 1152426 certs.go:226] acquiring lock for ca certs: {Name:mk10dcd85eee4d8b0413bd848f61156bf964b6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:38:06.778940 1152426 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key
	I0908 14:38:06.779006 1152426 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key
	I0908 14:38:06.779020 1152426 certs.go:256] generating profile certs ...
	I0908 14:38:06.779114 1152426 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/client.key
	I0908 14:38:06.779174 1152426 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/apiserver.key.41ab8838
	I0908 14:38:06.779208 1152426 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/proxy-client.key
	I0908 14:38:06.779331 1152426 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem (1338 bytes)
	W0908 14:38:06.779361 1152426 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875_empty.pem, impossibly tiny 0 bytes
	I0908 14:38:06.779371 1152426 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 14:38:06.779400 1152426 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem (1082 bytes)
	I0908 14:38:06.779423 1152426 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem (1123 bytes)
	I0908 14:38:06.779443 1152426 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem (1675 bytes)
	I0908 14:38:06.779482 1152426 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:38:06.780139 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 14:38:06.820118 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 14:38:06.872610 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 14:38:06.908080 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 14:38:06.942434 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0908 14:38:06.975800 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 14:38:07.008510 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 14:38:07.040741 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 14:38:07.074180 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /usr/share/ca-certificates/11208752.pem (1708 bytes)
	I0908 14:38:07.108824 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 14:38:07.143805 1152426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem --> /usr/share/ca-certificates/1120875.pem (1338 bytes)
	I0908 14:38:07.177304 1152426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 14:38:07.201471 1152426 ssh_runner.go:195] Run: openssl version
	I0908 14:38:07.208925 1152426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208752.pem && ln -fs /usr/share/ca-certificates/11208752.pem /etc/ssl/certs/11208752.pem"
	I0908 14:38:07.223820 1152426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208752.pem
	I0908 14:38:07.229944 1152426 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 13:46 /usr/share/ca-certificates/11208752.pem
	I0908 14:38:07.230021 1152426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208752.pem
	I0908 14:38:07.238418 1152426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11208752.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 14:38:07.253773 1152426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 14:38:07.268515 1152426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:38:07.274412 1152426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:35 /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:38:07.274480 1152426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:38:07.282297 1152426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 14:38:07.296528 1152426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120875.pem && ln -fs /usr/share/ca-certificates/1120875.pem /etc/ssl/certs/1120875.pem"
	I0908 14:38:07.311381 1152426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120875.pem
	I0908 14:38:07.317566 1152426 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 13:46 /usr/share/ca-certificates/1120875.pem
	I0908 14:38:07.317657 1152426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120875.pem
	I0908 14:38:07.325887 1152426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120875.pem /etc/ssl/certs/51391683.0"
	I0908 14:38:07.341438 1152426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 14:38:07.347914 1152426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 14:38:07.356733 1152426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 14:38:07.365537 1152426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 14:38:07.374460 1152426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 14:38:07.382966 1152426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 14:38:07.391453 1152426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
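Each `openssl x509 -noout -checkend 86400` above asks one question: does this certificate expire within the next 24 hours? The equivalent check in Go with crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching what `openssl x509 -noout -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; run this where the cert actually exists.
	exp, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", exp)
}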
	I0908 14:38:07.399921 1152426 kubeadm.go:392] StartCluster: {Name:test-preload-981850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-981850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:38:07.400019 1152426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 14:38:07.400143 1152426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 14:38:07.444264 1152426 cri.go:89] found id: ""
	I0908 14:38:07.444359 1152426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 14:38:07.458742 1152426 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 14:38:07.458773 1152426 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 14:38:07.458832 1152426 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 14:38:07.479090 1152426 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 14:38:07.479642 1152426 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-981850" does not appear in /home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 14:38:07.479862 1152426 kubeconfig.go:62] /home/jenkins/minikube-integration/21508-1116714/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-981850" cluster setting kubeconfig missing "test-preload-981850" context setting]
	I0908 14:38:07.480238 1152426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/kubeconfig: {Name:mk93422b0007d912fa8f198f71d62d01a418d566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:38:07.489932 1152426 kapi.go:59] client config for test-preload-981850: &rest.Config{Host:"https://192.168.39.184:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/client.crt", KeyFile:"/home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/client.key", CAFile:"/home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 14:38:07.490448 1152426 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0908 14:38:07.490467 1152426 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0908 14:38:07.490473 1152426 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0908 14:38:07.490478 1152426 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0908 14:38:07.490484 1152426 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0908 14:38:07.490879 1152426 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 14:38:07.507956 1152426 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.184
	I0908 14:38:07.508002 1152426 kubeadm.go:1152] stopping kube-system containers ...
	I0908 14:38:07.508023 1152426 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0908 14:38:07.508122 1152426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 14:38:07.566892 1152426 cri.go:89] found id: ""
	I0908 14:38:07.566985 1152426 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 14:38:07.594400 1152426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 14:38:07.607806 1152426 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 14:38:07.607834 1152426 kubeadm.go:157] found existing configuration files:
	
	I0908 14:38:07.607887 1152426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 14:38:07.620356 1152426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 14:38:07.620422 1152426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 14:38:07.635890 1152426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 14:38:07.648684 1152426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 14:38:07.648755 1152426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 14:38:07.665298 1152426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 14:38:07.678862 1152426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 14:38:07.678941 1152426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 14:38:07.692900 1152426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 14:38:07.705280 1152426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 14:38:07.705352 1152426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
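The four grep/rm pairs above amount to one loop over the kubeconfig-style files: keep a file only if it already points at the expected control-plane endpoint, otherwise remove it so `kubeadm init phase kubeconfig` regenerates it. A compact sketch:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs drops each config that does not reference endpoint,
// mirroring the repeated `sudo grep ... || sudo rm -f ...` steps above.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already targets the right endpoint; keep it
		}
		// Missing or stale: remove it so the next kubeadm phase rewrites it.
		os.Remove(p)
		fmt.Println("reset:", p)
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}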
	I0908 14:38:07.718354 1152426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 14:38:07.731883 1152426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 14:38:07.797198 1152426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 14:38:09.129949 1152426 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.332709783s)
	I0908 14:38:09.129987 1152426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 14:38:09.391051 1152426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 14:38:09.476905 1152426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 14:38:09.583759 1152426 api_server.go:52] waiting for apiserver process to appear ...
	I0908 14:38:09.583845 1152426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:38:10.084952 1152426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:38:10.584852 1152426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:38:11.084002 1152426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:38:11.114872 1152426 api_server.go:72] duration metric: took 1.531109088s to wait for apiserver process to appear ...
	I0908 14:38:11.114906 1152426 api_server.go:88] waiting for apiserver healthz status ...
	I0908 14:38:11.114928 1152426 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0908 14:38:13.744630 1152426 api_server.go:279] https://192.168.39.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 14:38:13.744677 1152426 api_server.go:103] status: https://192.168.39.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 14:38:13.744695 1152426 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0908 14:38:13.791393 1152426 api_server.go:279] https://192.168.39.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 14:38:13.791433 1152426 api_server.go:103] status: https://192.168.39.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 14:38:14.115968 1152426 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0908 14:38:14.121203 1152426 api_server.go:279] https://192.168.39.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 14:38:14.121254 1152426 api_server.go:103] status: https://192.168.39.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 14:38:14.615934 1152426 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0908 14:38:14.622692 1152426 api_server.go:279] https://192.168.39.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 14:38:14.622724 1152426 api_server.go:103] status: https://192.168.39.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 14:38:15.115953 1152426 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0908 14:38:15.126116 1152426 api_server.go:279] https://192.168.39.184:8443/healthz returned 200:
	ok
	I0908 14:38:15.137207 1152426 api_server.go:141] control plane version: v1.32.0
	I0908 14:38:15.137241 1152426 api_server.go:131] duration metric: took 4.022327436s to wait for apiserver health ...
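The wait above polls /healthz roughly every 500ms and treats both 403 (RBAC bootstrap roles not yet created, hence the anonymous-user rejection) and 500 (post-start hooks still failing) as retryable until a 200 arrives. A sketch of such a poll loop; InsecureSkipVerify is for the sketch only, the real client authenticates with the cluster's certs:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. 403s and 500s during startup are expected and retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip verification of the apiserver's serving cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.184:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}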
	I0908 14:38:15.137252 1152426 cni.go:84] Creating CNI manager for ""
	I0908 14:38:15.137258 1152426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:38:15.139385 1152426 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 14:38:15.140903 1152426 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 14:38:15.176298 1152426 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0908 14:38:15.215421 1152426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 14:38:15.229089 1152426 system_pods.go:59] 7 kube-system pods found
	I0908 14:38:15.229146 1152426 system_pods.go:61] "coredns-668d6bf9bc-r5zjb" [4af1d51b-008f-4ace-b14c-f544789ed8bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:38:15.229159 1152426 system_pods.go:61] "etcd-test-preload-981850" [c457a0f6-35aa-4684-9bdd-3333370be485] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:38:15.229167 1152426 system_pods.go:61] "kube-apiserver-test-preload-981850" [fa7cbcce-2550-41b6-b56f-85a8b501cd50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:38:15.229172 1152426 system_pods.go:61] "kube-controller-manager-test-preload-981850" [d3b91f18-4e75-4f34-bdeb-3514560c36ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:38:15.229178 1152426 system_pods.go:61] "kube-proxy-xkcwm" [67c30a26-8f97-444b-9d01-cc66ae501725] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 14:38:15.229183 1152426 system_pods.go:61] "kube-scheduler-test-preload-981850" [1348cd42-3d6b-486a-b616-09b1e8e92558] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 14:38:15.229188 1152426 system_pods.go:61] "storage-provisioner" [a823029c-84cf-4db6-8528-00f6e5fc4550] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:38:15.229197 1152426 system_pods.go:74] duration metric: took 13.747392ms to wait for pod list to return data ...
	I0908 14:38:15.229206 1152426 node_conditions.go:102] verifying NodePressure condition ...
	I0908 14:38:15.242866 1152426 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 14:38:15.242913 1152426 node_conditions.go:123] node cpu capacity is 2
	I0908 14:38:15.242932 1152426 node_conditions.go:105] duration metric: took 13.71966ms to run NodePressure ...
	I0908 14:38:15.242960 1152426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 14:38:15.569068 1152426 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 14:38:15.573192 1152426 kubeadm.go:735] kubelet initialised
	I0908 14:38:15.573221 1152426 kubeadm.go:736] duration metric: took 4.11844ms waiting for restarted kubelet to initialise ...
	I0908 14:38:15.573241 1152426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 14:38:15.603583 1152426 ops.go:34] apiserver oom_adj: -16
	I0908 14:38:15.603618 1152426 kubeadm.go:593] duration metric: took 8.144837945s to restartPrimaryControlPlane
	I0908 14:38:15.603633 1152426 kubeadm.go:394] duration metric: took 8.203722563s to StartCluster
	I0908 14:38:15.603671 1152426 settings.go:142] acquiring lock: {Name:mkc208e3a70732deaf67c191918f201f73e82457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:38:15.603772 1152426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 14:38:15.604416 1152426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/kubeconfig: {Name:mk93422b0007d912fa8f198f71d62d01a418d566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:38:15.604713 1152426 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 14:38:15.604822 1152426 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 14:38:15.604913 1152426 config.go:182] Loaded profile config "test-preload-981850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0908 14:38:15.604936 1152426 addons.go:69] Setting storage-provisioner=true in profile "test-preload-981850"
	I0908 14:38:15.604958 1152426 addons.go:238] Setting addon storage-provisioner=true in "test-preload-981850"
	W0908 14:38:15.604967 1152426 addons.go:247] addon storage-provisioner should already be in state true
	I0908 14:38:15.604971 1152426 addons.go:69] Setting default-storageclass=true in profile "test-preload-981850"
	I0908 14:38:15.604998 1152426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-981850"
	I0908 14:38:15.605001 1152426 host.go:66] Checking if "test-preload-981850" exists ...
	I0908 14:38:15.605477 1152426 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:38:15.605520 1152426 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:38:15.605549 1152426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:38:15.605569 1152426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:38:15.606482 1152426 out.go:179] * Verifying Kubernetes components...
	I0908 14:38:15.608291 1152426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:38:15.628381 1152426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0908 14:38:15.628477 1152426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0908 14:38:15.629049 1152426 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:38:15.629100 1152426 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:38:15.629584 1152426 main.go:141] libmachine: Using API Version  1
	I0908 14:38:15.629604 1152426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:38:15.629738 1152426 main.go:141] libmachine: Using API Version  1
	I0908 14:38:15.629770 1152426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:38:15.630018 1152426 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:38:15.630167 1152426 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:38:15.630376 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetState
	I0908 14:38:15.630614 1152426 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:38:15.630666 1152426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:38:15.632744 1152426 kapi.go:59] client config for test-preload-981850: &rest.Config{Host:"https://192.168.39.184:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/client.crt", KeyFile:"/home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/client.key", CAFile:"/home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
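	[Editor's note] The kapi.go dump above is a client-go rest.Config built from the profile's client certificate/key and the cluster CA. A minimal sketch of constructing an equivalent client with client-go follows; the host and file paths are copied from the log, but the code itself is an illustration, not minikube's kapi.go:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Mirror the fields visible in the kapi.go dump: host, client cert/key,
		// and the CA used to verify the apiserver's serving certificate.
		cfg := &rest.Config{
			Host: "https://192.168.39.184:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Quick sanity check: list kube-system pods, as the wait loops below do.
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	}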
	I0908 14:38:15.633031 1152426 addons.go:238] Setting addon default-storageclass=true in "test-preload-981850"
	W0908 14:38:15.633054 1152426 addons.go:247] addon default-storageclass should already be in state true
	I0908 14:38:15.633084 1152426 host.go:66] Checking if "test-preload-981850" exists ...
	I0908 14:38:15.633379 1152426 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:38:15.633429 1152426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:38:15.650714 1152426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I0908 14:38:15.651245 1152426 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:38:15.651857 1152426 main.go:141] libmachine: Using API Version  1
	I0908 14:38:15.651895 1152426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:38:15.652386 1152426 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:38:15.653264 1152426 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:38:15.653324 1152426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:38:15.655158 1152426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0908 14:38:15.655728 1152426 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:38:15.656368 1152426 main.go:141] libmachine: Using API Version  1
	I0908 14:38:15.656398 1152426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:38:15.656957 1152426 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:38:15.657447 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetState
	I0908 14:38:15.659892 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	I0908 14:38:15.661526 1152426 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:38:15.662960 1152426 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:38:15.662989 1152426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 14:38:15.663021 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:38:15.666340 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:15.666856 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:38:15.666891 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:15.667017 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHPort
	I0908 14:38:15.667242 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:38:15.667465 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHUsername
	I0908 14:38:15.667639 1152426 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/test-preload-981850/id_rsa Username:docker}
	I0908 14:38:15.672667 1152426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34435
	I0908 14:38:15.673197 1152426 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:38:15.673807 1152426 main.go:141] libmachine: Using API Version  1
	I0908 14:38:15.673838 1152426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:38:15.674235 1152426 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:38:15.674513 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetState
	I0908 14:38:15.676774 1152426 main.go:141] libmachine: (test-preload-981850) Calling .DriverName
	I0908 14:38:15.677025 1152426 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 14:38:15.677046 1152426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 14:38:15.677065 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHHostname
	I0908 14:38:15.680082 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:15.680530 1152426 main.go:141] libmachine: (test-preload-981850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:49:cb", ip: ""} in network mk-test-preload-981850: {Iface:virbr1 ExpiryTime:2025-09-08 15:37:52 +0000 UTC Type:0 Mac:52:54:00:36:49:cb Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:test-preload-981850 Clientid:01:52:54:00:36:49:cb}
	I0908 14:38:15.680562 1152426 main.go:141] libmachine: (test-preload-981850) DBG | domain test-preload-981850 has defined IP address 192.168.39.184 and MAC address 52:54:00:36:49:cb in network mk-test-preload-981850
	I0908 14:38:15.680754 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHPort
	I0908 14:38:15.680977 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHKeyPath
	I0908 14:38:15.681184 1152426 main.go:141] libmachine: (test-preload-981850) Calling .GetSSHUsername
	I0908 14:38:15.681351 1152426 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/test-preload-981850/id_rsa Username:docker}
	I0908 14:38:15.941325 1152426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:38:15.972423 1152426 node_ready.go:35] waiting up to 6m0s for node "test-preload-981850" to be "Ready" ...
	I0908 14:38:15.977048 1152426 node_ready.go:49] node "test-preload-981850" is "Ready"
	I0908 14:38:15.977085 1152426 node_ready.go:38] duration metric: took 4.610071ms for node "test-preload-981850" to be "Ready" ...
	I0908 14:38:15.977109 1152426 api_server.go:52] waiting for apiserver process to appear ...
	I0908 14:38:15.977186 1152426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:38:16.049499 1152426 api_server.go:72] duration metric: took 444.7352ms to wait for apiserver process to appear ...
	I0908 14:38:16.049540 1152426 api_server.go:88] waiting for apiserver healthz status ...
	I0908 14:38:16.049570 1152426 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0908 14:38:16.062491 1152426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:38:16.073353 1152426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 14:38:16.075698 1152426 api_server.go:279] https://192.168.39.184:8443/healthz returned 200:
	ok
	I0908 14:38:16.077916 1152426 api_server.go:141] control plane version: v1.32.0
	I0908 14:38:16.077945 1152426 api_server.go:131] duration metric: took 28.396571ms to wait for apiserver health ...
	I0908 14:38:16.077957 1152426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 14:38:16.094261 1152426 system_pods.go:59] 7 kube-system pods found
	I0908 14:38:16.094309 1152426 system_pods.go:61] "coredns-668d6bf9bc-r5zjb" [4af1d51b-008f-4ace-b14c-f544789ed8bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:38:16.094321 1152426 system_pods.go:61] "etcd-test-preload-981850" [c457a0f6-35aa-4684-9bdd-3333370be485] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:38:16.094339 1152426 system_pods.go:61] "kube-apiserver-test-preload-981850" [fa7cbcce-2550-41b6-b56f-85a8b501cd50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:38:16.094347 1152426 system_pods.go:61] "kube-controller-manager-test-preload-981850" [d3b91f18-4e75-4f34-bdeb-3514560c36ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:38:16.094359 1152426 system_pods.go:61] "kube-proxy-xkcwm" [67c30a26-8f97-444b-9d01-cc66ae501725] Running
	I0908 14:38:16.094371 1152426 system_pods.go:61] "kube-scheduler-test-preload-981850" [1348cd42-3d6b-486a-b616-09b1e8e92558] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 14:38:16.094380 1152426 system_pods.go:61] "storage-provisioner" [a823029c-84cf-4db6-8528-00f6e5fc4550] Running
	I0908 14:38:16.094395 1152426 system_pods.go:74] duration metric: took 16.428733ms to wait for pod list to return data ...
	I0908 14:38:16.094409 1152426 default_sa.go:34] waiting for default service account to be created ...
	I0908 14:38:16.111113 1152426 default_sa.go:45] found service account: "default"
	I0908 14:38:16.111154 1152426 default_sa.go:55] duration metric: took 16.737283ms for default service account to be created ...
	I0908 14:38:16.111165 1152426 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 14:38:16.197502 1152426 system_pods.go:86] 7 kube-system pods found
	I0908 14:38:16.197541 1152426 system_pods.go:89] "coredns-668d6bf9bc-r5zjb" [4af1d51b-008f-4ace-b14c-f544789ed8bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:38:16.197550 1152426 system_pods.go:89] "etcd-test-preload-981850" [c457a0f6-35aa-4684-9bdd-3333370be485] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:38:16.197558 1152426 system_pods.go:89] "kube-apiserver-test-preload-981850" [fa7cbcce-2550-41b6-b56f-85a8b501cd50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:38:16.197563 1152426 system_pods.go:89] "kube-controller-manager-test-preload-981850" [d3b91f18-4e75-4f34-bdeb-3514560c36ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:38:16.197567 1152426 system_pods.go:89] "kube-proxy-xkcwm" [67c30a26-8f97-444b-9d01-cc66ae501725] Running
	I0908 14:38:16.197572 1152426 system_pods.go:89] "kube-scheduler-test-preload-981850" [1348cd42-3d6b-486a-b616-09b1e8e92558] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 14:38:16.197580 1152426 system_pods.go:89] "storage-provisioner" [a823029c-84cf-4db6-8528-00f6e5fc4550] Running
	I0908 14:38:16.197587 1152426 system_pods.go:126] duration metric: took 86.417015ms to wait for k8s-apps to be running ...
	I0908 14:38:16.197595 1152426 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 14:38:16.197645 1152426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:38:17.374067 1152426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.311523393s)
	I0908 14:38:17.374140 1152426 main.go:141] libmachine: Making call to close driver server
	I0908 14:38:17.374154 1152426 main.go:141] libmachine: (test-preload-981850) Calling .Close
	I0908 14:38:17.374173 1152426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.300781018s)
	I0908 14:38:17.374219 1152426 main.go:141] libmachine: Making call to close driver server
	I0908 14:38:17.374235 1152426 main.go:141] libmachine: (test-preload-981850) Calling .Close
	I0908 14:38:17.374279 1152426 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.17661102s)
	I0908 14:38:17.374310 1152426 system_svc.go:56] duration metric: took 1.176710013s WaitForService to wait for kubelet
	I0908 14:38:17.374343 1152426 kubeadm.go:578] duration metric: took 1.76956723s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 14:38:17.374372 1152426 node_conditions.go:102] verifying NodePressure condition ...
	I0908 14:38:17.374491 1152426 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:38:17.374521 1152426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:38:17.374532 1152426 main.go:141] libmachine: Making call to close driver server
	I0908 14:38:17.374548 1152426 main.go:141] libmachine: (test-preload-981850) Calling .Close
	I0908 14:38:17.374682 1152426 main.go:141] libmachine: (test-preload-981850) DBG | Closing plugin on server side
	I0908 14:38:17.374862 1152426 main.go:141] libmachine: (test-preload-981850) DBG | Closing plugin on server side
	I0908 14:38:17.374880 1152426 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:38:17.374892 1152426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:38:17.374905 1152426 main.go:141] libmachine: Making call to close driver server
	I0908 14:38:17.374913 1152426 main.go:141] libmachine: (test-preload-981850) Calling .Close
	I0908 14:38:17.374927 1152426 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:38:17.374978 1152426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:38:17.375125 1152426 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:38:17.375139 1152426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:38:17.375182 1152426 main.go:141] libmachine: (test-preload-981850) DBG | Closing plugin on server side
	I0908 14:38:17.381352 1152426 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 14:38:17.381374 1152426 node_conditions.go:123] node cpu capacity is 2
	I0908 14:38:17.381385 1152426 node_conditions.go:105] duration metric: took 7.007934ms to run NodePressure ...
	I0908 14:38:17.381398 1152426 start.go:241] waiting for startup goroutines ...
	I0908 14:38:17.385330 1152426 main.go:141] libmachine: Making call to close driver server
	I0908 14:38:17.385353 1152426 main.go:141] libmachine: (test-preload-981850) Calling .Close
	I0908 14:38:17.385707 1152426 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:38:17.385729 1152426 main.go:141] libmachine: (test-preload-981850) DBG | Closing plugin on server side
	I0908 14:38:17.385730 1152426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:38:17.387295 1152426 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 14:38:17.388585 1152426 addons.go:514] duration metric: took 1.783779728s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0908 14:38:17.388622 1152426 start.go:246] waiting for cluster config update ...
	I0908 14:38:17.388644 1152426 start.go:255] writing updated cluster config ...
	I0908 14:38:17.388963 1152426 ssh_runner.go:195] Run: rm -f paused
	I0908 14:38:17.397129 1152426 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 14:38:17.397823 1152426 kapi.go:59] client config for test-preload-981850: &rest.Config{Host:"https://192.168.39.184:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/client.crt", KeyFile:"/home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/test-preload-981850/client.key", CAFile:"/home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
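	[Editor's note] The pod_ready.go loop below selects kube-system pods by the listed labels and waits for each one's PodReady condition to become True. A hedged client-go sketch of that predicate (not minikube's pod_ready.go; the kubeconfig path is a placeholder):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True, which is
	// the predicate a wait loop like pod_ready.go keys on.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder path; the test uses the profile's generated kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// One of the label selectors from the wait list above.
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			fmt.Printf("%s ready=%v\n", pods.Items[i].Name, isPodReady(&pods.Items[i]))
		}
	}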
	I0908 14:38:17.401541 1152426 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-r5zjb" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:38:18.408060 1152426 pod_ready.go:94] pod "coredns-668d6bf9bc-r5zjb" is "Ready"
	I0908 14:38:18.408091 1152426 pod_ready.go:86] duration metric: took 1.006513932s for pod "coredns-668d6bf9bc-r5zjb" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:38:18.412066 1152426 pod_ready.go:83] waiting for pod "etcd-test-preload-981850" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 14:38:20.418923 1152426 pod_ready.go:104] pod "etcd-test-preload-981850" is not "Ready", error: <nil>
	W0908 14:38:22.419347 1152426 pod_ready.go:104] pod "etcd-test-preload-981850" is not "Ready", error: <nil>
	W0908 14:38:24.421424 1152426 pod_ready.go:104] pod "etcd-test-preload-981850" is not "Ready", error: <nil>
	W0908 14:38:26.919620 1152426 pod_ready.go:104] pod "etcd-test-preload-981850" is not "Ready", error: <nil>
	I0908 14:38:27.419304 1152426 pod_ready.go:94] pod "etcd-test-preload-981850" is "Ready"
	I0908 14:38:27.419337 1152426 pod_ready.go:86] duration metric: took 9.007232877s for pod "etcd-test-preload-981850" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:38:27.422650 1152426 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-981850" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:38:27.429618 1152426 pod_ready.go:94] pod "kube-apiserver-test-preload-981850" is "Ready"
	I0908 14:38:27.429667 1152426 pod_ready.go:86] duration metric: took 6.985941ms for pod "kube-apiserver-test-preload-981850" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:38:27.433539 1152426 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-981850" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:38:27.940305 1152426 pod_ready.go:94] pod "kube-controller-manager-test-preload-981850" is "Ready"
	I0908 14:38:27.940334 1152426 pod_ready.go:86] duration metric: took 506.764795ms for pod "kube-controller-manager-test-preload-981850" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:38:27.943243 1152426 pod_ready.go:83] waiting for pod "kube-proxy-xkcwm" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:38:28.016612 1152426 pod_ready.go:94] pod "kube-proxy-xkcwm" is "Ready"
	I0908 14:38:28.016646 1152426 pod_ready.go:86] duration metric: took 73.365716ms for pod "kube-proxy-xkcwm" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:38:28.217246 1152426 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-981850" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:38:28.616738 1152426 pod_ready.go:94] pod "kube-scheduler-test-preload-981850" is "Ready"
	I0908 14:38:28.616774 1152426 pod_ready.go:86] duration metric: took 399.494595ms for pod "kube-scheduler-test-preload-981850" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:38:28.616793 1152426 pod_ready.go:40] duration metric: took 11.219611812s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 14:38:28.666957 1152426 start.go:617] kubectl: 1.33.2, cluster: 1.32.0 (minor skew: 1)
	I0908 14:38:28.669404 1152426 out.go:179] * Done! kubectl is now configured to use "test-preload-981850" cluster and "default" namespace by default
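	[Editor's note] The CRI-O section below is CRI-O's debug log of the CRI gRPC traffic generated while these logs were collected: Version, ImageFsInfo, ListContainers, and ListPodSandbox requests with their full responses. As a rough sketch of issuing the same Version call from Go with the CRI API client (the socket path assumes CRI-O's default endpoint; this is not the log collector's actual code):

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default runtime endpoint; adjust if crio.conf overrides it.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		// Mirrors the /runtime.v1.RuntimeService/Version exchanges in the log.
		resp, err := client.Version(context.Background(), &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
	}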
	
	
	==> CRI-O <==
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.744546253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=228354ca-d018-4e0d-b8b2-037cda0116f6 name=/runtime.v1.RuntimeService/Version
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.746292916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2c8da31-4bda-49de-9b73-00a4a632658b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.747205586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757342309747116312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2c8da31-4bda-49de-9b73-00a4a632658b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.748670813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbe040ef-0e72-4574-909c-1e830ea06013 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.749071281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbe040ef-0e72-4574-909c-1e830ea06013 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.749421173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d64185eb09007fef310af288e997cdac5a819cf94e1abbdf109d999a80d9c48,PodSandboxId:354ae74c5e51e74929cc08d542ce8725b423d9096c5449bd320e8bd15056f237,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757342296803182875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-r5zjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af1d51b-008f-4ace-b14c-f544789ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5afb3746333e643e875a86318a9c30fe406942ef36b5f3a27e7b9896d36d22c6,PodSandboxId:442f3cb54fcc2188db3e468bafc6be8f713f8f5c7b134817546443c0803f07ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757342295127264255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkcwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 67c30a26-8f97-444b-9d01-cc66ae501725,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015940b0a3acbc678810937b4628acd9e5bafd08188459b592858c4f9bbd433e,PodSandboxId:ce0fba4473f72119f0055621a290a62d65c1d2fa8c40850e186f1ba561750852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757342294995047846,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8
23029c-84cf-4db6-8528-00f6e5fc4550,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81caa151975677f33e1746d21fe04a0da03381b970cd1a01080a802e6cd5c75,PodSandboxId:9709481e832f80ab02c6768b26235bf4b80de4723cb48876fea8fed663668b81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757342290678602229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7bbbdd400306ea43a9c16d89e2fef4,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af164dea313df4f1276e37b3d3b200c5634668c133c5b97866c8fc801d18508,PodSandboxId:1bdd512aaaebaf267bacfb135f3ad3045965fbe6ed59f69fb461651b44371809,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757342290645023173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a48ddf84f496c2c5def32
2a83c149c,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7824f9b0c37b4425ab56a976a47443c8ce28314b08bc5a25e8a4aa5f03ee447,PodSandboxId:4d2b446a31863592cb486944c54e7e4c566b8427bca70468d0aeb0c1533d23cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757342290631487363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c63d8117cfec59cd96a42ae52a6e10f,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8b7d7a1fc575d3fe4161575a7cc833eb34de121fb8033d271e4753291c2010,PodSandboxId:1909b05bc2adf6c21400d292bb39843c7f02c4b29af017e9f6b9458b2c69881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757342290571937876,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02738106d93bda29a76901af573e0ea,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbe040ef-0e72-4574-909c-1e830ea06013 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.793500556Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1034e298-0960-426a-9918-b962194b66b6 name=/runtime.v1.RuntimeService/Version
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.793592873Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1034e298-0960-426a-9918-b962194b66b6 name=/runtime.v1.RuntimeService/Version
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.794713593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ca1579f-b45c-4f82-a869-764a4cf28090 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.795204320Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757342309795180394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ca1579f-b45c-4f82-a869-764a4cf28090 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.795942269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f733bdc-eed3-4d7d-afc0-cb1a2277bd1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.795998976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f733bdc-eed3-4d7d-afc0-cb1a2277bd1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.796150275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d64185eb09007fef310af288e997cdac5a819cf94e1abbdf109d999a80d9c48,PodSandboxId:354ae74c5e51e74929cc08d542ce8725b423d9096c5449bd320e8bd15056f237,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757342296803182875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-r5zjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af1d51b-008f-4ace-b14c-f544789ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5afb3746333e643e875a86318a9c30fe406942ef36b5f3a27e7b9896d36d22c6,PodSandboxId:442f3cb54fcc2188db3e468bafc6be8f713f8f5c7b134817546443c0803f07ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757342295127264255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkcwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 67c30a26-8f97-444b-9d01-cc66ae501725,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015940b0a3acbc678810937b4628acd9e5bafd08188459b592858c4f9bbd433e,PodSandboxId:ce0fba4473f72119f0055621a290a62d65c1d2fa8c40850e186f1ba561750852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757342294995047846,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8
23029c-84cf-4db6-8528-00f6e5fc4550,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81caa151975677f33e1746d21fe04a0da03381b970cd1a01080a802e6cd5c75,PodSandboxId:9709481e832f80ab02c6768b26235bf4b80de4723cb48876fea8fed663668b81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757342290678602229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7bbbdd400306ea43a9c16d89e2fef4,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af164dea313df4f1276e37b3d3b200c5634668c133c5b97866c8fc801d18508,PodSandboxId:1bdd512aaaebaf267bacfb135f3ad3045965fbe6ed59f69fb461651b44371809,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757342290645023173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a48ddf84f496c2c5def32
2a83c149c,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7824f9b0c37b4425ab56a976a47443c8ce28314b08bc5a25e8a4aa5f03ee447,PodSandboxId:4d2b446a31863592cb486944c54e7e4c566b8427bca70468d0aeb0c1533d23cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757342290631487363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c63d8117cfec59cd96a42ae52a6e10f,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8b7d7a1fc575d3fe4161575a7cc833eb34de121fb8033d271e4753291c2010,PodSandboxId:1909b05bc2adf6c21400d292bb39843c7f02c4b29af017e9f6b9458b2c69881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757342290571937876,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02738106d93bda29a76901af573e0ea,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f733bdc-eed3-4d7d-afc0-cb1a2277bd1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.824320993Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8607fd62-6060-446a-a98b-0edf630db2d3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.825289879Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:354ae74c5e51e74929cc08d542ce8725b423d9096c5449bd320e8bd15056f237,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-r5zjb,Uid:4af1d51b-008f-4ace-b14c-f544789ed8bb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757342296205108641,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-r5zjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af1d51b-008f-4ace-b14c-f544789ed8bb,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T14:38:14.478257140Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:442f3cb54fcc2188db3e468bafc6be8f713f8f5c7b134817546443c0803f07ef,Metadata:&PodSandboxMetadata{Name:kube-proxy-xkcwm,Uid:67c30a26-8f97-444b-9d01-cc66ae501725,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1757342294803074609,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xkcwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c30a26-8f97-444b-9d01-cc66ae501725,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T14:38:14.478261717Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce0fba4473f72119f0055621a290a62d65c1d2fa8c40850e186f1ba561750852,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a823029c-84cf-4db6-8528-00f6e5fc4550,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757342294797505291,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a823029c-84cf-4db6-8528-00f6
e5fc4550,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-08T14:38:14.478255202Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9709481e832f80ab02c6768b26235bf4b80de4723cb48876fea8fed663668b81,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-981850,Uid:6c7bbbdd400306ea4
3a9c16d89e2fef4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757342290375257751,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7bbbdd400306ea43a9c16d89e2fef4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.184:2379,kubernetes.io/config.hash: 6c7bbbdd400306ea43a9c16d89e2fef4,kubernetes.io/config.seen: 2025-09-08T14:38:09.566007055Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4d2b446a31863592cb486944c54e7e4c566b8427bca70468d0aeb0c1533d23cc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-981850,Uid:7c63d8117cfec59cd96a42ae52a6e10f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757342290339836683,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-pr
eload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c63d8117cfec59cd96a42ae52a6e10f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.184:8443,kubernetes.io/config.hash: 7c63d8117cfec59cd96a42ae52a6e10f,kubernetes.io/config.seen: 2025-09-08T14:38:09.487479528Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1909b05bc2adf6c21400d292bb39843c7f02c4b29af017e9f6b9458b2c69881e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-981850,Uid:b02738106d93bda29a76901af573e0ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757342290336234183,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02738106d93bda29a76901af573e0ea,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b02738106d93bda
29a76901af573e0ea,kubernetes.io/config.seen: 2025-09-08T14:38:09.487485545Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1bdd512aaaebaf267bacfb135f3ad3045965fbe6ed59f69fb461651b44371809,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-981850,Uid:a7a48ddf84f496c2c5def322a83c149c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757342290335559091,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a48ddf84f496c2c5def322a83c149c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a7a48ddf84f496c2c5def322a83c149c,kubernetes.io/config.seen: 2025-09-08T14:38:09.487484445Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8607fd62-6060-446a-a98b-0edf630db2d3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.827142507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9a97efd-a237-4ddc-8796-6149ab23846c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.827373777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9a97efd-a237-4ddc-8796-6149ab23846c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.827588387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d64185eb09007fef310af288e997cdac5a819cf94e1abbdf109d999a80d9c48,PodSandboxId:354ae74c5e51e74929cc08d542ce8725b423d9096c5449bd320e8bd15056f237,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757342296803182875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-r5zjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af1d51b-008f-4ace-b14c-f544789ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5afb3746333e643e875a86318a9c30fe406942ef36b5f3a27e7b9896d36d22c6,PodSandboxId:442f3cb54fcc2188db3e468bafc6be8f713f8f5c7b134817546443c0803f07ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757342295127264255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkcwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c30a26-8f97-444b-9d01-cc66ae501725,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015940b0a3acbc678810937b4628acd9e5bafd08188459b592858c4f9bbd433e,PodSandboxId:ce0fba4473f72119f0055621a290a62d65c1d2fa8c40850e186f1ba561750852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757342294995047846,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a823029c-84cf-4db6-8528-00f6e5fc4550,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81caa151975677f33e1746d21fe04a0da03381b970cd1a01080a802e6cd5c75,PodSandboxId:9709481e832f80ab02c6768b26235bf4b80de4723cb48876fea8fed663668b81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757342290678602229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7bbbdd400306ea43a9c16d89e2fef4,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af164dea313df4f1276e37b3d3b200c5634668c133c5b97866c8fc801d18508,PodSandboxId:1bdd512aaaebaf267bacfb135f3ad3045965fbe6ed59f69fb461651b44371809,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757342290645023173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a48ddf84f496c2c5def322a83c149c,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7824f9b0c37b4425ab56a976a47443c8ce28314b08bc5a25e8a4aa5f03ee447,PodSandboxId:4d2b446a31863592cb486944c54e7e4c566b8427bca70468d0aeb0c1533d23cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757342290631487363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c63d8117cfec59cd96a42ae52a6e10f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8b7d7a1fc575d3fe4161575a7cc833eb34de121fb8033d271e4753291c2010,PodSandboxId:1909b05bc2adf6c21400d292bb39843c7f02c4b29af017e9f6b9458b2c69881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757342290571937876,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02738106d93bda29a76901af573e0ea,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9a97efd-a237-4ddc-8796-6149ab23846c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.840878873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e21fa6be-cc0e-44da-b66d-012e3af1e78d name=/runtime.v1.RuntimeService/Version
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.840973106Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e21fa6be-cc0e-44da-b66d-012e3af1e78d name=/runtime.v1.RuntimeService/Version
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.842361092Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eef324d9-6921-4c71-b299-7ca480b8c6b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.843091532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757342309843062355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eef324d9-6921-4c71-b299-7ca480b8c6b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.844028941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f50bc6a-d659-4d23-8db4-b1781c80e6d0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.844271052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f50bc6a-d659-4d23-8db4-b1781c80e6d0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 14:38:29 test-preload-981850 crio[840]: time="2025-09-08 14:38:29.844495457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d64185eb09007fef310af288e997cdac5a819cf94e1abbdf109d999a80d9c48,PodSandboxId:354ae74c5e51e74929cc08d542ce8725b423d9096c5449bd320e8bd15056f237,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757342296803182875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-r5zjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af1d51b-008f-4ace-b14c-f544789ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5afb3746333e643e875a86318a9c30fe406942ef36b5f3a27e7b9896d36d22c6,PodSandboxId:442f3cb54fcc2188db3e468bafc6be8f713f8f5c7b134817546443c0803f07ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757342295127264255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkcwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c30a26-8f97-444b-9d01-cc66ae501725,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015940b0a3acbc678810937b4628acd9e5bafd08188459b592858c4f9bbd433e,PodSandboxId:ce0fba4473f72119f0055621a290a62d65c1d2fa8c40850e186f1ba561750852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757342294995047846,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a823029c-84cf-4db6-8528-00f6e5fc4550,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81caa151975677f33e1746d21fe04a0da03381b970cd1a01080a802e6cd5c75,PodSandboxId:9709481e832f80ab02c6768b26235bf4b80de4723cb48876fea8fed663668b81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757342290678602229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7bbbdd400306ea43a9c16d89e2fef4,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af164dea313df4f1276e37b3d3b200c5634668c133c5b97866c8fc801d18508,PodSandboxId:1bdd512aaaebaf267bacfb135f3ad3045965fbe6ed59f69fb461651b44371809,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757342290645023173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a48ddf84f496c2c5def322a83c149c,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7824f9b0c37b4425ab56a976a47443c8ce28314b08bc5a25e8a4aa5f03ee447,PodSandboxId:4d2b446a31863592cb486944c54e7e4c566b8427bca70468d0aeb0c1533d23cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757342290631487363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c63d8117cfec59cd96a42ae52a6e10f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8b7d7a1fc575d3fe4161575a7cc833eb34de121fb8033d271e4753291c2010,PodSandboxId:1909b05bc2adf6c21400d292bb39843c7f02c4b29af017e9f6b9458b2c69881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757342290571937876,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-981850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02738106d93bda29a76901af573e0ea,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f50bc6a-d659-4d23-8db4-b1781c80e6d0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7d64185eb0900       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago      Running             coredns                   1                   354ae74c5e51e       coredns-668d6bf9bc-r5zjb
	5afb3746333e6       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   442f3cb54fcc2       kube-proxy-xkcwm
	015940b0a3acb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   ce0fba4473f72       storage-provisioner
	a81caa1519756       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   9709481e832f8       etcd-test-preload-981850
	8af164dea313d       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   1bdd512aaaeba       kube-controller-manager-test-preload-981850
	e7824f9b0c37b       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   4d2b446a31863       kube-apiserver-test-preload-981850
	ef8b7d7a1fc57       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   1909b05bc2adf       kube-scheduler-test-preload-981850
	
	
	==> coredns [7d64185eb09007fef310af288e997cdac5a819cf94e1abbdf109d999a80d9c48] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59793 - 51042 "HINFO IN 794530963346120494.8091735899303613258. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.076913294s
	
	
	==> describe nodes <==
	Name:               test-preload-981850
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-981850
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=test-preload-981850
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T14_36_39_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 14:36:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-981850
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 14:38:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 14:38:15 +0000   Mon, 08 Sep 2025 14:36:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 14:38:15 +0000   Mon, 08 Sep 2025 14:36:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 14:38:15 +0000   Mon, 08 Sep 2025 14:36:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 14:38:15 +0000   Mon, 08 Sep 2025 14:38:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    test-preload-981850
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 01c6d3e21d4342b2819bdcf74db8869b
	  System UUID:                01c6d3e2-1d43-42b2-819b-dcf74db8869b
	  Boot ID:                    ceb67023-ae62-4e99-a42d-f9fcf3d3b517
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-r5zjb                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     106s
	  kube-system                 etcd-test-preload-981850                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         111s
	  kube-system                 kube-apiserver-test-preload-981850             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-test-preload-981850    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-xkcwm                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-test-preload-981850             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 103s                 kube-proxy       
	  Normal   Starting                 14s                  kube-proxy       
	  Normal   Starting                 118s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node test-preload-981850 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node test-preload-981850 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s (x7 over 118s)  kubelet          Node test-preload-981850 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    111s                 kubelet          Node test-preload-981850 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  111s                 kubelet          Node test-preload-981850 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     111s                 kubelet          Node test-preload-981850 status is now: NodeHasSufficientPID
	  Normal   NodeReady                111s                 kubelet          Node test-preload-981850 status is now: NodeReady
	  Normal   Starting                 111s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           107s                 node-controller  Node test-preload-981850 event: Registered Node test-preload-981850 in Controller
	  Normal   Starting                 21s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node test-preload-981850 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node test-preload-981850 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node test-preload-981850 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-981850 has been rebooted, boot id: ceb67023-ae62-4e99-a42d-f9fcf3d3b517
	  Normal   RegisteredNode           13s                  node-controller  Node test-preload-981850 event: Registered Node test-preload-981850 in Controller
	
	
	==> dmesg <==
	[Sep 8 14:37] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000053] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006809] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.026352] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 8 14:38] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.103737] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.632800] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.036908] kauditd_printk_skb: 203 callbacks suppressed
	
	
	==> etcd [a81caa151975677f33e1746d21fe04a0da03381b970cd1a01080a802e6cd5c75] <==
	{"level":"info","ts":"2025-09-08T14:38:11.253294Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e6ef3f762f24aa4a","local-member-id":"989272a6374482ea","added-peer-id":"989272a6374482ea","added-peer-peer-urls":["https://192.168.39.184:2380"]}
	{"level":"info","ts":"2025-09-08T14:38:11.253437Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e6ef3f762f24aa4a","local-member-id":"989272a6374482ea","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T14:38:11.253485Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T14:38:11.256870Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-08T14:38:11.262400Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-08T14:38:11.265483Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"989272a6374482ea","initial-advertise-peer-urls":["https://192.168.39.184:2380"],"listen-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-08T14:38:11.266794Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-08T14:38:11.266937Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2025-09-08T14:38:11.266965Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2025-09-08T14:38:12.482316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-08T14:38:12.482384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-08T14:38:12.482420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgPreVoteResp from 989272a6374482ea at term 2"}
	{"level":"info","ts":"2025-09-08T14:38:12.482434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became candidate at term 3"}
	{"level":"info","ts":"2025-09-08T14:38:12.482457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgVoteResp from 989272a6374482ea at term 3"}
	{"level":"info","ts":"2025-09-08T14:38:12.482466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became leader at term 3"}
	{"level":"info","ts":"2025-09-08T14:38:12.482473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 989272a6374482ea elected leader 989272a6374482ea at term 3"}
	{"level":"info","ts":"2025-09-08T14:38:12.485943Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"989272a6374482ea","local-member-attributes":"{Name:test-preload-981850 ClientURLs:[https://192.168.39.184:2379]}","request-path":"/0/members/989272a6374482ea/attributes","cluster-id":"e6ef3f762f24aa4a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-08T14:38:12.485960Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T14:38:12.486125Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T14:38:12.486855Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-08T14:38:12.486912Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-08T14:38:12.487071Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-08T14:38:12.487419Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-08T14:38:12.487653Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.184:2379"}
	{"level":"info","ts":"2025-09-08T14:38:12.488165Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:38:30 up 0 min,  0 users,  load average: 0.93, 0.29, 0.10
	Linux test-preload-981850 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e7824f9b0c37b4425ab56a976a47443c8ce28314b08bc5a25e8a4aa5f03ee447] <==
	I0908 14:38:13.813880       1 shared_informer.go:320] Caches are synced for configmaps
	I0908 14:38:13.816899       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0908 14:38:13.817356       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0908 14:38:13.817641       1 aggregator.go:171] initial CRD sync complete...
	I0908 14:38:13.817671       1 autoregister_controller.go:144] Starting autoregister controller
	I0908 14:38:13.817688       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0908 14:38:13.817726       1 cache.go:39] Caches are synced for autoregister controller
	I0908 14:38:13.824554       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0908 14:38:13.824599       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0908 14:38:13.825575       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0908 14:38:13.827963       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0908 14:38:13.850632       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0908 14:38:13.864442       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0908 14:38:13.870686       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0908 14:38:13.870854       1 policy_source.go:240] refreshing policies
	I0908 14:38:13.942563       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0908 14:38:14.561141       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0908 14:38:14.691039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0908 14:38:15.396978       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0908 14:38:15.451671       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0908 14:38:15.511221       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 14:38:15.534971       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 14:38:17.316409       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 14:38:17.371673       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0908 14:38:17.414230       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8af164dea313df4f1276e37b3d3b200c5634668c133c5b97866c8fc801d18508] <==
	I0908 14:38:17.073549       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 14:38:17.075677       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-981850"
	I0908 14:38:17.076060       1 shared_informer.go:320] Caches are synced for garbage collector
	I0908 14:38:17.076272       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 14:38:17.076298       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 14:38:17.077696       1 shared_informer.go:320] Caches are synced for GC
	I0908 14:38:17.078066       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 14:38:17.078606       1 shared_informer.go:320] Caches are synced for garbage collector
	I0908 14:38:17.081081       1 shared_informer.go:320] Caches are synced for service account
	I0908 14:38:17.083020       1 shared_informer.go:320] Caches are synced for expand
	I0908 14:38:17.090159       1 shared_informer.go:320] Caches are synced for job
	I0908 14:38:17.103831       1 shared_informer.go:320] Caches are synced for PVC protection
	I0908 14:38:17.111059       1 shared_informer.go:320] Caches are synced for HPA
	I0908 14:38:17.113450       1 shared_informer.go:320] Caches are synced for endpoint
	I0908 14:38:17.118132       1 shared_informer.go:320] Caches are synced for disruption
	I0908 14:38:17.119474       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0908 14:38:17.129905       1 shared_informer.go:320] Caches are synced for resource quota
	I0908 14:38:17.129940       1 shared_informer.go:320] Caches are synced for daemon sets
	I0908 14:38:17.129982       1 shared_informer.go:320] Caches are synced for resource quota
	I0908 14:38:17.135423       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0908 14:38:17.382822       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="317.540734ms"
	I0908 14:38:17.383249       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="142.479µs"
	I0908 14:38:17.803482       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="99.308µs"
	I0908 14:38:18.181158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.08369ms"
	I0908 14:38:18.181423       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="64.693µs"
	
	
	==> kube-proxy [5afb3746333e643e875a86318a9c30fe406942ef36b5f3a27e7b9896d36d22c6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0908 14:38:15.456126       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0908 14:38:15.477370       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.184"]
	E0908 14:38:15.477672       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 14:38:15.612441       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0908 14:38:15.612491       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 14:38:15.612515       1 server_linux.go:170] "Using iptables Proxier"
	I0908 14:38:15.622936       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 14:38:15.624571       1 server.go:497] "Version info" version="v1.32.0"
	I0908 14:38:15.624815       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 14:38:15.628374       1 config.go:199] "Starting service config controller"
	I0908 14:38:15.628423       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0908 14:38:15.628448       1 config.go:105] "Starting endpoint slice config controller"
	I0908 14:38:15.628452       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0908 14:38:15.628999       1 config.go:329] "Starting node config controller"
	I0908 14:38:15.629006       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0908 14:38:15.729364       1 shared_informer.go:320] Caches are synced for node config
	I0908 14:38:15.729416       1 shared_informer.go:320] Caches are synced for service config
	I0908 14:38:15.729426       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ef8b7d7a1fc575d3fe4161575a7cc833eb34de121fb8033d271e4753291c2010] <==
	I0908 14:38:11.484205       1 serving.go:386] Generated self-signed cert in-memory
	W0908 14:38:13.747973       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 14:38:13.748079       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 14:38:13.748090       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 14:38:13.748139       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 14:38:13.840611       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0908 14:38:13.840662       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 14:38:13.855579       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 14:38:13.855641       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0908 14:38:13.856055       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0908 14:38:13.856159       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 14:38:13.966030       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 08 14:38:13 test-preload-981850 kubelet[1170]: I0908 14:38:13.931549    1170 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-981850"
	Sep 08 14:38:13 test-preload-981850 kubelet[1170]: I0908 14:38:13.931638    1170 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-981850"
	Sep 08 14:38:13 test-preload-981850 kubelet[1170]: I0908 14:38:13.931662    1170 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 08 14:38:13 test-preload-981850 kubelet[1170]: I0908 14:38:13.932497    1170 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 08 14:38:13 test-preload-981850 kubelet[1170]: I0908 14:38:13.935031    1170 setters.go:602] "Node became not ready" node="test-preload-981850" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-08T14:38:13Z","lastTransitionTime":"2025-09-08T14:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Sep 08 14:38:13 test-preload-981850 kubelet[1170]: E0908 14:38:13.957956    1170 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-981850\" already exists" pod="kube-system/etcd-test-preload-981850"
	Sep 08 14:38:13 test-preload-981850 kubelet[1170]: I0908 14:38:13.957984    1170 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-981850"
	Sep 08 14:38:13 test-preload-981850 kubelet[1170]: E0908 14:38:13.981024    1170 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-981850\" already exists" pod="kube-system/kube-apiserver-test-preload-981850"
	Sep 08 14:38:14 test-preload-981850 kubelet[1170]: I0908 14:38:14.474642    1170 apiserver.go:52] "Watching apiserver"
	Sep 08 14:38:14 test-preload-981850 kubelet[1170]: E0908 14:38:14.482573    1170 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-r5zjb" podUID="4af1d51b-008f-4ace-b14c-f544789ed8bb"
	Sep 08 14:38:14 test-preload-981850 kubelet[1170]: I0908 14:38:14.493604    1170 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 08 14:38:14 test-preload-981850 kubelet[1170]: I0908 14:38:14.553345    1170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67c30a26-8f97-444b-9d01-cc66ae501725-lib-modules\") pod \"kube-proxy-xkcwm\" (UID: \"67c30a26-8f97-444b-9d01-cc66ae501725\") " pod="kube-system/kube-proxy-xkcwm"
	Sep 08 14:38:14 test-preload-981850 kubelet[1170]: I0908 14:38:14.553439    1170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67c30a26-8f97-444b-9d01-cc66ae501725-xtables-lock\") pod \"kube-proxy-xkcwm\" (UID: \"67c30a26-8f97-444b-9d01-cc66ae501725\") " pod="kube-system/kube-proxy-xkcwm"
	Sep 08 14:38:14 test-preload-981850 kubelet[1170]: I0908 14:38:14.553467    1170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a823029c-84cf-4db6-8528-00f6e5fc4550-tmp\") pod \"storage-provisioner\" (UID: \"a823029c-84cf-4db6-8528-00f6e5fc4550\") " pod="kube-system/storage-provisioner"
	Sep 08 14:38:14 test-preload-981850 kubelet[1170]: E0908 14:38:14.554107    1170 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 08 14:38:14 test-preload-981850 kubelet[1170]: E0908 14:38:14.554201    1170 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4af1d51b-008f-4ace-b14c-f544789ed8bb-config-volume podName:4af1d51b-008f-4ace-b14c-f544789ed8bb nodeName:}" failed. No retries permitted until 2025-09-08 14:38:15.054179927 +0000 UTC m=+5.687281684 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4af1d51b-008f-4ace-b14c-f544789ed8bb-config-volume") pod "coredns-668d6bf9bc-r5zjb" (UID: "4af1d51b-008f-4ace-b14c-f544789ed8bb") : object "kube-system"/"coredns" not registered
	Sep 08 14:38:14 test-preload-981850 kubelet[1170]: I0908 14:38:14.726165    1170 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-981850"
	Sep 08 14:38:14 test-preload-981850 kubelet[1170]: E0908 14:38:14.736659    1170 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-981850\" already exists" pod="kube-system/kube-apiserver-test-preload-981850"
	Sep 08 14:38:15 test-preload-981850 kubelet[1170]: E0908 14:38:15.059451    1170 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 08 14:38:15 test-preload-981850 kubelet[1170]: E0908 14:38:15.059519    1170 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4af1d51b-008f-4ace-b14c-f544789ed8bb-config-volume podName:4af1d51b-008f-4ace-b14c-f544789ed8bb nodeName:}" failed. No retries permitted until 2025-09-08 14:38:16.059506217 +0000 UTC m=+6.692607986 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4af1d51b-008f-4ace-b14c-f544789ed8bb-config-volume") pod "coredns-668d6bf9bc-r5zjb" (UID: "4af1d51b-008f-4ace-b14c-f544789ed8bb") : object "kube-system"/"coredns" not registered
	Sep 08 14:38:15 test-preload-981850 kubelet[1170]: I0908 14:38:15.530458    1170 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Sep 08 14:38:19 test-preload-981850 kubelet[1170]: E0908 14:38:19.548394    1170 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757342299547920148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 08 14:38:19 test-preload-981850 kubelet[1170]: E0908 14:38:19.548419    1170 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757342299547920148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 08 14:38:29 test-preload-981850 kubelet[1170]: E0908 14:38:29.550345    1170 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757342309549633973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 08 14:38:29 test-preload-981850 kubelet[1170]: E0908 14:38:29.550372    1170 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757342309549633973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [015940b0a3acbc678810937b4628acd9e5bafd08188459b592858c4f9bbd433e] <==
	I0908 14:38:15.176122       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-981850 -n test-preload-981850
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-981850 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-981850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-981850
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-981850: (1.075980946s)
--- FAIL: TestPreload (170.42s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (89.4s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-120061 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-120061 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.091843725s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-120061] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-120061" primary control-plane node in "pause-120061" cluster
	* Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-120061" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 14:46:55.153791 1161261 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:46:55.154130 1161261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:46:55.154147 1161261 out.go:374] Setting ErrFile to fd 2...
	I0908 14:46:55.154152 1161261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:46:55.154363 1161261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 14:46:55.155051 1161261 out.go:368] Setting JSON to false
	I0908 14:46:55.156321 1161261 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":19759,"bootTime":1757323056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 14:46:55.156412 1161261 start.go:140] virtualization: kvm guest
	I0908 14:46:55.158659 1161261 out.go:179] * [pause-120061] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 14:46:55.160459 1161261 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 14:46:55.160551 1161261 notify.go:220] Checking for updates...
	I0908 14:46:55.163190 1161261 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:46:55.164618 1161261 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 14:46:55.166460 1161261 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:46:55.167976 1161261 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 14:46:55.169385 1161261 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:46:55.171419 1161261 config.go:182] Loaded profile config "pause-120061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:46:55.172336 1161261 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:46:55.172531 1161261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:46:55.199217 1161261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44289
	I0908 14:46:55.199885 1161261 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:46:55.200572 1161261 main.go:141] libmachine: Using API Version  1
	I0908 14:46:55.200601 1161261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:46:55.201228 1161261 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:46:55.201593 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:46:55.201906 1161261 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:46:55.202316 1161261 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:46:55.202373 1161261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:46:55.220533 1161261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I0908 14:46:55.221445 1161261 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:46:55.222184 1161261 main.go:141] libmachine: Using API Version  1
	I0908 14:46:55.222222 1161261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:46:55.222693 1161261 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:46:55.222927 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:46:55.269227 1161261 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 14:46:55.270394 1161261 start.go:304] selected driver: kvm2
	I0908 14:46:55.270422 1161261 start.go:918] validating driver "kvm2" against &{Name:pause-120061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-120061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:46:55.270677 1161261 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:46:55.271241 1161261 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:46:55.271362 1161261 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21508-1116714/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 14:46:55.292429 1161261 install.go:137] /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 14:46:55.293500 1161261 cni.go:84] Creating CNI manager for ""
	I0908 14:46:55.293569 1161261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:46:55.293659 1161261 start.go:348] cluster config:
	{Name:pause-120061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-120061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:46:55.293878 1161261 iso.go:125] acquiring lock: {Name:mk347390bf24761f2c39bf4cd5b718f157a50faa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:46:55.295765 1161261 out.go:179] * Starting "pause-120061" primary control-plane node in "pause-120061" cluster
	I0908 14:46:55.296905 1161261 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 14:46:55.296958 1161261 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 14:46:55.296968 1161261 cache.go:58] Caching tarball of preloaded images
	I0908 14:46:55.297100 1161261 preload.go:172] Found /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 14:46:55.297116 1161261 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 14:46:55.297310 1161261 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/config.json ...
	I0908 14:46:55.297598 1161261 start.go:360] acquireMachinesLock for pause-120061: {Name:mk0626ae9b324aeb819357e3de70b05b9e4c30a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 14:47:38.934186 1161261 start.go:364] duration metric: took 43.636529867s to acquireMachinesLock for "pause-120061"
	I0908 14:47:38.934281 1161261 start.go:96] Skipping create...Using existing machine configuration
	I0908 14:47:38.934293 1161261 fix.go:54] fixHost starting: 
	I0908 14:47:38.934795 1161261 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:38.934865 1161261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:38.953899 1161261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0908 14:47:38.954585 1161261 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:38.955180 1161261 main.go:141] libmachine: Using API Version  1
	I0908 14:47:38.955214 1161261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:38.955734 1161261 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:38.955978 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:38.956209 1161261 main.go:141] libmachine: (pause-120061) Calling .GetState
	I0908 14:47:38.958177 1161261 fix.go:112] recreateIfNeeded on pause-120061: state=Running err=<nil>
	W0908 14:47:38.958231 1161261 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 14:47:38.960278 1161261 out.go:252] * Updating the running kvm2 "pause-120061" VM ...
	I0908 14:47:38.960324 1161261 machine.go:93] provisionDockerMachine start ...
	I0908 14:47:38.960364 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:38.960695 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:38.964020 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:38.964583 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:38.964624 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:38.964874 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:38.965165 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:38.965375 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:38.965541 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:38.965701 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:38.966030 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:38.966048 1161261 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 14:47:39.087038 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-120061
	
	I0908 14:47:39.087094 1161261 main.go:141] libmachine: (pause-120061) Calling .GetMachineName
	I0908 14:47:39.087412 1161261 buildroot.go:166] provisioning hostname "pause-120061"
	I0908 14:47:39.087435 1161261 main.go:141] libmachine: (pause-120061) Calling .GetMachineName
	I0908 14:47:39.087596 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.091091 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.091719 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.091743 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.092016 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.092297 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.092524 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.092745 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.092990 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:39.093266 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:39.093281 1161261 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-120061 && echo "pause-120061" | sudo tee /etc/hostname
	I0908 14:47:39.231080 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-120061
	
	I0908 14:47:39.231115 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.234280 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.234692 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.234735 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.234995 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.235241 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.235419 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.235543 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.235743 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:39.235953 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:39.235969 1161261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-120061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-120061/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-120061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:47:39.358526 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:47:39.358561 1161261 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21508-1116714/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-1116714/.minikube}
	I0908 14:47:39.358630 1161261 buildroot.go:174] setting up certificates
	I0908 14:47:39.358646 1161261 provision.go:84] configureAuth start
	I0908 14:47:39.358662 1161261 main.go:141] libmachine: (pause-120061) Calling .GetMachineName
	I0908 14:47:39.359057 1161261 main.go:141] libmachine: (pause-120061) Calling .GetIP
	I0908 14:47:39.362365 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.362831 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.362858 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.362995 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.366014 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.366565 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.366609 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.366788 1161261 provision.go:143] copyHostCerts
	I0908 14:47:39.366878 1161261 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem, removing ...
	I0908 14:47:39.366900 1161261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem
	I0908 14:47:39.366971 1161261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem (1082 bytes)
	I0908 14:47:39.367120 1161261 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem, removing ...
	I0908 14:47:39.367134 1161261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem
	I0908 14:47:39.367165 1161261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem (1123 bytes)
	I0908 14:47:39.367258 1161261 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem, removing ...
	I0908 14:47:39.367269 1161261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem
	I0908 14:47:39.367297 1161261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem (1675 bytes)
	I0908 14:47:39.367390 1161261 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem org=jenkins.pause-120061 san=[127.0.0.1 192.168.61.147 localhost minikube pause-120061]
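For reference, the SAN list baked into the regenerated server certificate can be confirmed with openssl; a minimal sketch, assuming the server.pem path from the provision step above:

	# Print the Subject Alternative Names minikube generated for the server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expected to list: 127.0.0.1, 192.168.61.147, localhost, minikube, pause-120061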
	I0908 14:47:39.573674 1161261 provision.go:177] copyRemoteCerts
	I0908 14:47:39.573751 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:47:39.573781 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.577127 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.577650 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.577687 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.577836 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.578123 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.578302 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.578501 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
	I0908 14:47:39.678101 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 14:47:39.716835 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 14:47:39.765726 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 14:47:39.813075 1161261 provision.go:87] duration metric: took 454.409899ms to configureAuth
	I0908 14:47:39.813115 1161261 buildroot.go:189] setting minikube options for container-runtime
	I0908 14:47:39.813416 1161261 config.go:182] Loaded profile config "pause-120061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:47:39.813522 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.816873 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.817323 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.817356 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.817651 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.817919 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.818144 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.818328 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.818555 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:39.818896 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:39.818913 1161261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 14:47:45.543761 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 14:47:45.543805 1161261 machine.go:96] duration metric: took 6.583470839s to provisionDockerMachine
	I0908 14:47:45.543824 1161261 start.go:293] postStartSetup for "pause-120061" (driver="kvm2")
	I0908 14:47:45.543839 1161261 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:47:45.543865 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.544268 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:47:45.544299 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.548239 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.548620 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.548665 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.548918 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.549128 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.549315 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.549481 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
	I0908 14:47:45.651211 1161261 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:47:45.658742 1161261 info.go:137] Remote host: Buildroot 2025.02
	I0908 14:47:45.658788 1161261 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/addons for local assets ...
	I0908 14:47:45.658868 1161261 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/files for local assets ...
	I0908 14:47:45.658969 1161261 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem -> 11208752.pem in /etc/ssl/certs
	I0908 14:47:45.659097 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:47:45.676039 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:45.724138 1161261 start.go:296] duration metric: took 180.282144ms for postStartSetup
	I0908 14:47:45.724193 1161261 fix.go:56] duration metric: took 6.789899375s for fixHost
	I0908 14:47:45.724223 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.727807 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.728227 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.728256 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.728609 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.728821 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.728957 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.729071 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.729234 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:45.729638 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:45.729654 1161261 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 14:47:45.846172 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757342865.843199249
	
	I0908 14:47:45.846208 1161261 fix.go:216] guest clock: 1757342865.843199249
	I0908 14:47:45.846220 1161261 fix.go:229] Guest: 2025-09-08 14:47:45.843199249 +0000 UTC Remote: 2025-09-08 14:47:45.724198252 +0000 UTC m=+50.631490013 (delta=119.000997ms)
	I0908 14:47:45.846246 1161261 fix.go:200] guest clock delta is within tolerance: 119.000997ms
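The clock check above reads the guest clock over SSH (date +%s.%N) and compares it against the host-side timestamp captured when fixHost finished; the ~119ms delta is inside tolerance. A rough sketch of the same comparison, assuming SSH access as the docker user with the machine key shown earlier and bc on the host (illustrative only):

	# Approximate guest-vs-host clock delta, mirroring the fix step above
	guest=$(ssh docker@192.168.61.147 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest clock delta: $(echo "$guest - $host" | bc)s"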
	I0908 14:47:45.846254 1161261 start.go:83] releasing machines lock for "pause-120061", held for 6.912017635s
	I0908 14:47:45.846294 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.846620 1161261 main.go:141] libmachine: (pause-120061) Calling .GetIP
	I0908 14:47:45.849936 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.850359 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.850429 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.850680 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.851390 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.851623 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.851760 1161261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:47:45.851826 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.851903 1161261 ssh_runner.go:195] Run: cat /version.json
	I0908 14:47:45.851933 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.855883 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.856051 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.856613 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.856683 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.856713 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.856755 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.857042 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.857146 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.857256 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.857456 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.857469 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.857681 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
	I0908 14:47:45.858044 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.858209 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
	I0908 14:47:45.984024 1161261 ssh_runner.go:195] Run: systemctl --version
	I0908 14:47:45.994417 1161261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 14:47:46.189541 1161261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 14:47:46.205243 1161261 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 14:47:46.205348 1161261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:47:46.225389 1161261 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 14:47:46.225428 1161261 start.go:495] detecting cgroup driver to use...
	I0908 14:47:46.225519 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 14:47:46.259747 1161261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 14:47:46.288963 1161261 docker.go:218] disabling cri-docker service (if available) ...
	I0908 14:47:46.289158 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 14:47:46.320181 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 14:47:46.347824 1161261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 14:47:46.556387 1161261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 14:47:46.797576 1161261 docker.go:234] disabling docker service ...
	I0908 14:47:46.797675 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 14:47:46.847535 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 14:47:46.878193 1161261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 14:47:47.161555 1161261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 14:47:47.442372 1161261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:47:47.462302 1161261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:47:47.492084 1161261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 14:47:47.492176 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.508165 1161261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 14:47:47.508295 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.528597 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.546925 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.563039 1161261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:47:47.583391 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.598701 1161261 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.619434 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
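Taken together, the sed/grep edits above pin the pause image, switch the cgroup manager, keep conmon in the pod cgroup, and open unprivileged low ports. A quick spot-check of the resulting drop-in, with expected values (reconstructed from the commands above, not the verbatim file) shown as comments:

	# Verify the cri-o drop-in reflects the edits above
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	grep -A1 'default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",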
	I0908 14:47:47.641052 1161261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:47:47.654092 1161261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 14:47:47.668357 1161261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:47.985180 1161261 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 14:47:51.484903 1161261 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.499673595s)
	I0908 14:47:51.484943 1161261 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 14:47:51.485020 1161261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 14:47:51.491847 1161261 start.go:563] Will wait 60s for crictl version
	I0908 14:47:51.491926 1161261 ssh_runner.go:195] Run: which crictl
	I0908 14:47:51.497807 1161261 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:47:51.555525 1161261 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 14:47:51.555677 1161261 ssh_runner.go:195] Run: crio --version
	I0908 14:47:51.590312 1161261 ssh_runner.go:195] Run: crio --version
	I0908 14:47:51.637110 1161261 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 14:47:51.638446 1161261 main.go:141] libmachine: (pause-120061) Calling .GetIP
	I0908 14:47:51.642263 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:51.642744 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:51.642776 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:51.643169 1161261 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0908 14:47:51.649711 1161261 kubeadm.go:875] updating cluster {Name:pause-120061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-120061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 14:47:51.649917 1161261 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 14:47:51.649988 1161261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:51.704103 1161261 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 14:47:51.704142 1161261 crio.go:433] Images already preloaded, skipping extraction
	I0908 14:47:51.704223 1161261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:51.748253 1161261 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 14:47:51.748292 1161261 cache_images.go:85] Images are preloaded, skipping loading
	I0908 14:47:51.748303 1161261 kubeadm.go:926] updating node { 192.168.61.147 8443 v1.34.0 crio true true} ...
	I0908 14:47:51.748454 1161261 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-120061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-120061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 14:47:51.748544 1161261 ssh_runner.go:195] Run: crio config
	I0908 14:47:51.824864 1161261 cni.go:84] Creating CNI manager for ""
	I0908 14:47:51.824905 1161261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:47:51.824923 1161261 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 14:47:51.824965 1161261 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.147 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-120061 NodeName:pause-120061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 14:47:51.825192 1161261 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-120061"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.147"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.147"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 14:47:51.825283 1161261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 14:47:51.846600 1161261 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 14:47:51.846699 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 14:47:51.862367 1161261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0908 14:47:51.890754 1161261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 14:47:51.921238 1161261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0908 14:47:51.949413 1161261 ssh_runner.go:195] Run: grep 192.168.61.147	control-plane.minikube.internal$ /etc/hosts
	I0908 14:47:51.955910 1161261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:52.155633 1161261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:47:52.176352 1161261 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061 for IP: 192.168.61.147
	I0908 14:47:52.176384 1161261 certs.go:194] generating shared ca certs ...
	I0908 14:47:52.176403 1161261 certs.go:226] acquiring lock for ca certs: {Name:mk10dcd85eee4d8b0413bd848f61156bf964b6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:52.176662 1161261 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key
	I0908 14:47:52.176721 1161261 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key
	I0908 14:47:52.176735 1161261 certs.go:256] generating profile certs ...
	I0908 14:47:52.176854 1161261 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/client.key
	I0908 14:47:52.176942 1161261 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/apiserver.key.71e213e0
	I0908 14:47:52.177028 1161261 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/proxy-client.key
	I0908 14:47:52.177196 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem (1338 bytes)
	W0908 14:47:52.177239 1161261 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875_empty.pem, impossibly tiny 0 bytes
	I0908 14:47:52.177253 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 14:47:52.177292 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem (1082 bytes)
	I0908 14:47:52.177334 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem (1123 bytes)
	I0908 14:47:52.177362 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem (1675 bytes)
	I0908 14:47:52.177417 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:52.178125 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 14:47:52.216860 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 14:47:52.264992 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 14:47:52.315906 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 14:47:52.366512 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 14:47:52.407534 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 14:47:52.457127 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 14:47:52.505152 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 14:47:52.549547 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /usr/share/ca-certificates/11208752.pem (1708 bytes)
	I0908 14:47:52.588151 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 14:47:52.629239 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem --> /usr/share/ca-certificates/1120875.pem (1338 bytes)
	I0908 14:47:52.666334 1161261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 14:47:52.692809 1161261 ssh_runner.go:195] Run: openssl version
	I0908 14:47:52.700407 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208752.pem && ln -fs /usr/share/ca-certificates/11208752.pem /etc/ssl/certs/11208752.pem"
	I0908 14:47:52.717734 1161261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208752.pem
	I0908 14:47:52.725301 1161261 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 13:46 /usr/share/ca-certificates/11208752.pem
	I0908 14:47:52.725396 1161261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208752.pem
	I0908 14:47:52.735515 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11208752.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 14:47:52.751195 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 14:47:52.769652 1161261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:52.777129 1161261 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:35 /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:52.777209 1161261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:52.787042 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 14:47:52.803329 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120875.pem && ln -fs /usr/share/ca-certificates/1120875.pem /etc/ssl/certs/1120875.pem"
	I0908 14:47:52.822959 1161261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120875.pem
	I0908 14:47:52.831158 1161261 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 13:46 /usr/share/ca-certificates/1120875.pem
	I0908 14:47:52.831251 1161261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120875.pem
	I0908 14:47:52.848780 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120875.pem /etc/ssl/certs/51391683.0"
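The 3ec20f2e.0, b5213941.0, and 51391683.0 link names above are OpenSSL subject-hash names: the value printed by the openssl x509 -hash calls in this log is exactly what becomes the /etc/ssl/certs/<hash>.0 symlink. For example, from the minikubeCA link created above:

	# The subject hash is the symlink name OpenSSL's certificate lookup expects
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above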
	I0908 14:47:52.910305 1161261 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 14:47:52.947063 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 14:47:52.980746 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 14:47:53.017172 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 14:47:53.029502 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 14:47:53.050518 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 14:47:53.066057 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
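Each -checkend 86400 call above exits non-zero if the certificate expires within the next 86,400 seconds (24 hours), which is what lets this step decide the existing control-plane certs can be reused. A standalone sketch using the etcd server cert path from the log:

	# Exit status encodes whether the cert is still valid 24h from now
	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	  echo "cert valid for at least another 24h"
	else
	  echo "cert expires within 24h; would need regeneration"
	fi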
	I0908 14:47:53.090136 1161261 kubeadm.go:392] StartCluster: {Name:pause-120061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-120061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:47:53.090336 1161261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 14:47:53.090436 1161261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 14:47:53.258288 1161261 cri.go:89] found id: "f396885ab602525616471c4a3078ab5befab72cec72eb50c586e5eb321dbf922"
	I0908 14:47:53.258340 1161261 cri.go:89] found id: "6f6f4bdc578435a925c85945bddfe6a5ac8b51b3cc376b776a33a1d585bd2c29"
	I0908 14:47:53.258348 1161261 cri.go:89] found id: "6936912d89250ecd151886026e92e7d034661849c0bfab75a31547b61a0fe66a"
	I0908 14:47:53.258352 1161261 cri.go:89] found id: "ee305c82781917bfbaab4b509ef785aeb3b96bd60c2ec05530b1c3d48a225512"
	I0908 14:47:53.258356 1161261 cri.go:89] found id: "06f87ac3295d31633f69192af6ed4823f0bf18648983434dcaa6db09d069d6bd"
	I0908 14:47:53.258361 1161261 cri.go:89] found id: "8ed8110fce0f009048f3aca5ce0a9a67946864f102d5a3e3a5da1c1053c5cb04"
	I0908 14:47:53.258366 1161261 cri.go:89] found id: ""
	I0908 14:47:53.258430 1161261 ssh_runner.go:195] Run: sudo runc list -f json
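
The `found id:` lines come from splitting the output of the `crictl ps -a --quiet` invocation shown above; `--quiet` prints one bare container ID per line for the filtered kube-system namespace. A small sketch of that parse, assuming crictl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// --quiet prints bare container IDs, one per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if id != "" {
			fmt.Println("found id:", id)
		}
	}
}
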

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-120061 -n pause-120061
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-120061 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-120061 logs -n 25: (2.88949883s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-814283 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo containerd config dump                                                                                                                                                                                                │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo crio config                                                                                                                                                                                                           │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ delete  │ -p cilium-814283                                                                                                                                                                                                                            │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │ 08 Sep 25 14:45 UTC │
	│ start   │ -p force-systemd-flag-847393 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-847393 │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │ 08 Sep 25 14:46 UTC │
	│ delete  │ -p cert-expiration-001432                                                                                                                                                                                                                   │ cert-expiration-001432    │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │ 08 Sep 25 14:45 UTC │
	│ start   │ -p cert-options-110049 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-110049       │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │ 08 Sep 25 14:47 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-448633 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                 │ running-upgrade-448633    │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │                     │
	│ delete  │ -p running-upgrade-448633                                                                                                                                                                                                                   │ running-upgrade-448633    │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │ 08 Sep 25 14:46 UTC │
	│ start   │ -p old-k8s-version-454279 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-454279    │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │                     │
	│ ssh     │ force-systemd-flag-847393 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-847393 │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │ 08 Sep 25 14:46 UTC │
	│ delete  │ -p force-systemd-flag-847393                                                                                                                                                                                                                │ force-systemd-flag-847393 │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │ 08 Sep 25 14:46 UTC │
	│ start   │ -p no-preload-301894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-301894         │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │                     │
	│ start   │ -p pause-120061 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-120061              │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │ 08 Sep 25 14:48 UTC │
	│ ssh     │ cert-options-110049 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-110049       │ jenkins │ v1.36.0 │ 08 Sep 25 14:47 UTC │ 08 Sep 25 14:47 UTC │
	│ ssh     │ -p cert-options-110049 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-110049       │ jenkins │ v1.36.0 │ 08 Sep 25 14:47 UTC │ 08 Sep 25 14:47 UTC │
	│ delete  │ -p cert-options-110049                                                                                                                                                                                                                      │ cert-options-110049       │ jenkins │ v1.36.0 │ 08 Sep 25 14:47 UTC │ 08 Sep 25 14:47 UTC │
	│ start   │ -p embed-certs-372004 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-372004        │ jenkins │ v1.36.0 │ 08 Sep 25 14:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 14:47:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 14:47:09.160568 1161554 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:47:09.160683 1161554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:47:09.160689 1161554 out.go:374] Setting ErrFile to fd 2...
	I0908 14:47:09.160695 1161554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:47:09.160939 1161554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 14:47:09.161680 1161554 out.go:368] Setting JSON to false
	I0908 14:47:09.162744 1161554 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":19773,"bootTime":1757323056,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 14:47:09.162871 1161554 start.go:140] virtualization: kvm guest
	I0908 14:47:09.165021 1161554 out.go:179] * [embed-certs-372004] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 14:47:09.166691 1161554 notify.go:220] Checking for updates...
	I0908 14:47:09.166731 1161554 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 14:47:09.168900 1161554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:47:09.170377 1161554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 14:47:09.171507 1161554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:47:09.172730 1161554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 14:47:09.173985 1161554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:47:09.175713 1161554 config.go:182] Loaded profile config "no-preload-301894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:47:09.175835 1161554 config.go:182] Loaded profile config "old-k8s-version-454279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0908 14:47:09.175952 1161554 config.go:182] Loaded profile config "pause-120061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:47:09.176071 1161554 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:47:09.218786 1161554 out.go:179] * Using the kvm2 driver based on user configuration
	I0908 14:47:09.220218 1161554 start.go:304] selected driver: kvm2
	I0908 14:47:09.220247 1161554 start.go:918] validating driver "kvm2" against <nil>
	I0908 14:47:09.220264 1161554 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:47:09.221394 1161554 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:47:09.221493 1161554 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21508-1116714/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 14:47:09.238868 1161554 install.go:137] /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 14:47:09.238946 1161554 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 14:47:09.239238 1161554 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 14:47:09.239288 1161554 cni.go:84] Creating CNI manager for ""
	I0908 14:47:09.239343 1161554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:47:09.239356 1161554 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
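
The cni.go lines show the recommendation rule in effect here: the kvm2 driver paired with the crio runtime, with no explicit --cni flag, gets the built-in bridge CNI and NetworkPlugin=cni. A simplified sketch of that decision (not minikube's full selection table):

package main

import "fmt"

// chooseCNI is a simplified sketch of the rule logged by cni.go above:
// kvm2 plus crio, with nothing requested, falls back to bridge.
// minikube's real logic covers many more driver/runtime combinations.
func chooseCNI(driver, runtime, requested string) string {
	if requested != "" {
		return requested
	}
	if driver == "kvm2" && runtime == "crio" {
		return "bridge"
	}
	return "auto"
}

func main() {
	fmt.Println(chooseCNI("kvm2", "crio", "")) // bridge
}
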
	I0908 14:47:09.239447 1161554 start.go:348] cluster config:
	{Name:embed-certs-372004 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-372004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:47:09.239572 1161554 iso.go:125] acquiring lock: {Name:mk347390bf24761f2c39bf4cd5b718f157a50faa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:47:09.241462 1161554 out.go:179] * Starting "embed-certs-372004" primary control-plane node in "embed-certs-372004" cluster
	I0908 14:47:13.234419 1161065 start.go:364] duration metric: took 31.013485176s to acquireMachinesLock for "no-preload-301894"
	I0908 14:47:13.234502 1161065 start.go:93] Provisioning new machine with config: &{Name:no-preload-301894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-301894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 14:47:13.234615 1161065 start.go:125] createHost starting for "" (driver="kvm2")
	I0908 14:47:08.421613 1160669 main.go:141] libmachine: (old-k8s-version-454279) reserved static IP address 192.168.50.48 for domain old-k8s-version-454279
	I0908 14:47:08.421639 1160669 main.go:141] libmachine: (old-k8s-version-454279) waiting for SSH...
	I0908 14:47:08.421827 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Getting to WaitForSSH function...
	I0908 14:47:08.425019 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:08.425509 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279
	I0908 14:47:08.425534 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | unable to find defined IP address of network mk-old-k8s-version-454279 interface with MAC address 52:54:00:78:56:ae
	I0908 14:47:08.425750 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Using SSH client type: external
	I0908 14:47:08.425784 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Using SSH private key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa (-rw-------)
	I0908 14:47:08.425843 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 14:47:08.425862 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | About to run SSH command:
	I0908 14:47:08.425880 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | exit 0
	I0908 14:47:08.430385 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | SSH cmd err, output: exit status 255: 
	I0908 14:47:08.430424 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0908 14:47:08.430434 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | command : exit 0
	I0908 14:47:08.430439 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | err     : exit status 255
	I0908 14:47:08.430448 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | output  : 
	I0908 14:47:11.432171 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Getting to WaitForSSH function...
	I0908 14:47:11.435749 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.436378 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:11.436414 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.436668 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Using SSH client type: external
	I0908 14:47:11.436689 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Using SSH private key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa (-rw-------)
	I0908 14:47:11.436753 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 14:47:11.436774 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | About to run SSH command:
	I0908 14:47:11.436787 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | exit 0
	I0908 14:47:11.569076 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | SSH cmd err, output: <nil>: 
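
WaitForSSH above simply retries `exit 0` over SSH: the first attempt fails with status 255 because the VM has no DHCP lease yet, and the retry succeeds once 192.168.50.48 is assigned. A sketch of that readiness loop, with a hypothetical key path and retry interval:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH polls `ssh ... "exit 0"` until the guest answers, the same
// probe the libmachine DBG lines above show failing once and then passing.
func waitForSSH(host, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath, "docker@"+host, "exit 0")
		if cmd.Run() == nil {
			return nil // guest is reachable
		}
		time.Sleep(3 * time.Second) // roughly the gap between the two probes above
	}
	return fmt.Errorf("ssh to %s never became ready", host)
}

func main() {
	// Hypothetical key path; the log uses the per-machine id_rsa.
	if err := waitForSSH("192.168.50.48", "/tmp/id_rsa", 10); err != nil {
		fmt.Println(err)
	}
}
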
	I0908 14:47:11.569330 1160669 main.go:141] libmachine: (old-k8s-version-454279) KVM machine creation complete
	I0908 14:47:11.569697 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetConfigRaw
	I0908 14:47:11.570442 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:11.570678 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:11.570867 1160669 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0908 14:47:11.570882 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetState
	I0908 14:47:11.572530 1160669 main.go:141] libmachine: Detecting operating system of created instance...
	I0908 14:47:11.572548 1160669 main.go:141] libmachine: Waiting for SSH to be available...
	I0908 14:47:11.572554 1160669 main.go:141] libmachine: Getting to WaitForSSH function...
	I0908 14:47:11.572562 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:11.575449 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.575866 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:11.575893 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.576075 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:11.576303 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.576473 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.576619 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:11.576834 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:11.577105 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:11.577117 1160669 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0908 14:47:11.696175 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:47:11.696207 1160669 main.go:141] libmachine: Detecting the provisioner...
	I0908 14:47:11.696217 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:11.699719 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.700138 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:11.700159 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.700334 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:11.700589 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.700796 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.700947 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:11.701143 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:11.701350 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:11.701361 1160669 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0908 14:47:11.821894 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0908 14:47:11.822006 1160669 main.go:141] libmachine: found compatible host: buildroot
	I0908 14:47:11.822037 1160669 main.go:141] libmachine: Provisioning with buildroot...
	I0908 14:47:11.822052 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetMachineName
	I0908 14:47:11.822417 1160669 buildroot.go:166] provisioning hostname "old-k8s-version-454279"
	I0908 14:47:11.822451 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetMachineName
	I0908 14:47:11.822694 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:11.827383 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.827954 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:11.827998 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.828197 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:11.828461 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.828657 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.828803 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:11.829021 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:11.829259 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:11.829278 1160669 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-454279 && echo "old-k8s-version-454279" | sudo tee /etc/hostname
	I0908 14:47:11.970256 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-454279
	
	I0908 14:47:11.970285 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:11.973594 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.974161 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:11.974183 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.974497 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:11.974721 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.974906 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.975126 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:11.975320 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:11.975562 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:11.975605 1160669 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-454279' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-454279/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-454279' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:47:12.104712 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:47:12.104744 1160669 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21508-1116714/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-1116714/.minikube}
	I0908 14:47:12.104764 1160669 buildroot.go:174] setting up certificates
	I0908 14:47:12.104774 1160669 provision.go:84] configureAuth start
	I0908 14:47:12.104783 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetMachineName
	I0908 14:47:12.105185 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetIP
	I0908 14:47:12.108318 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.108694 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.108727 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.109039 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:12.111754 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.112092 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.112124 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.112306 1160669 provision.go:143] copyHostCerts
	I0908 14:47:12.112402 1160669 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem, removing ...
	I0908 14:47:12.112418 1160669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem
	I0908 14:47:12.112486 1160669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem (1675 bytes)
	I0908 14:47:12.112586 1160669 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem, removing ...
	I0908 14:47:12.112595 1160669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem
	I0908 14:47:12.112614 1160669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem (1082 bytes)
	I0908 14:47:12.112663 1160669 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem, removing ...
	I0908 14:47:12.112670 1160669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem
	I0908 14:47:12.112687 1160669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem (1123 bytes)
	I0908 14:47:12.112731 1160669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-454279 san=[127.0.0.1 192.168.50.48 localhost minikube old-k8s-version-454279]
	I0908 14:47:12.456603 1160669 provision.go:177] copyRemoteCerts
	I0908 14:47:12.456689 1160669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:47:12.456720 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:12.459997 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.460440 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.460462 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.460632 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:12.460892 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:12.461102 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:12.461282 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:12.555929 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 14:47:12.587739 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0908 14:47:12.619560 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 14:47:12.653024 1160669 provision.go:87] duration metric: took 548.233152ms to configureAuth
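
configureAuth finishes by copying ca.pem, server.pem and server-key.pem into /etc/docker on the guest (the three ssh_runner.go:362 scp lines above). A rough equivalent of one such copy, shelling out to scp with hypothetical paths:

package main

import (
	"fmt"
	"os/exec"
)

// scpToGuest copies one local file onto the guest, roughly what each
// `ssh_runner.go:362] scp ... -->` line above does for a certificate.
func scpToGuest(keyPath, local, host, remote string) error {
	return exec.Command("scp",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		local, fmt.Sprintf("docker@%s:%s", host, remote)).Run()
}

func main() {
	// Hypothetical paths standing in for the .minikube cert locations.
	if err := scpToGuest("/tmp/id_rsa", "/tmp/ca.pem",
		"192.168.50.48", "/etc/docker/ca.pem"); err != nil {
		fmt.Println(err)
	}
}
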
	I0908 14:47:12.653061 1160669 buildroot.go:189] setting minikube options for container-runtime
	I0908 14:47:12.653249 1160669 config.go:182] Loaded profile config "old-k8s-version-454279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0908 14:47:12.653344 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:12.656324 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.656711 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.656762 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.656968 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:12.657232 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:12.657399 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:12.657567 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:12.657755 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:12.657974 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:12.657989 1160669 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 14:47:12.942523 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 14:47:12.942555 1160669 main.go:141] libmachine: Checking connection to Docker...
	I0908 14:47:12.942568 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetURL
	I0908 14:47:12.944034 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | using libvirt version 6000000
	I0908 14:47:12.947008 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.947476 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.947510 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.947716 1160669 main.go:141] libmachine: Docker is up and running!
	I0908 14:47:12.947731 1160669 main.go:141] libmachine: Reticulating splines...
	I0908 14:47:12.947738 1160669 client.go:171] duration metric: took 28.558043276s to LocalClient.Create
	I0908 14:47:12.947766 1160669 start.go:167] duration metric: took 28.55812507s to libmachine.API.Create "old-k8s-version-454279"
	I0908 14:47:12.947781 1160669 start.go:293] postStartSetup for "old-k8s-version-454279" (driver="kvm2")
	I0908 14:47:12.947797 1160669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:47:12.947820 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:12.948102 1160669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:47:12.948128 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:12.950626 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.950966 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.950991 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.951166 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:12.951368 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:12.951564 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:12.951709 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:13.045854 1160669 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:47:13.051916 1160669 info.go:137] Remote host: Buildroot 2025.02
	I0908 14:47:13.051954 1160669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/addons for local assets ...
	I0908 14:47:13.052050 1160669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/files for local assets ...
	I0908 14:47:13.052159 1160669 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem -> 11208752.pem in /etc/ssl/certs
	I0908 14:47:13.052292 1160669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:47:13.066204 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:13.105685 1160669 start.go:296] duration metric: took 157.882889ms for postStartSetup
	I0908 14:47:13.105743 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetConfigRaw
	I0908 14:47:13.106484 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetIP
	I0908 14:47:13.109310 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.109734 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:13.109760 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.110162 1160669 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/config.json ...
	I0908 14:47:13.110392 1160669 start.go:128] duration metric: took 28.743951957s to createHost
	I0908 14:47:13.110424 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:13.113374 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.113818 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:13.113847 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.114065 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:13.114325 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:13.114518 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:13.114687 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:13.114875 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:13.115133 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:13.115147 1160669 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 14:47:13.234172 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757342833.210015423
	
	I0908 14:47:13.234205 1160669 fix.go:216] guest clock: 1757342833.210015423
	I0908 14:47:13.234217 1160669 fix.go:229] Guest: 2025-09-08 14:47:13.210015423 +0000 UTC Remote: 2025-09-08 14:47:13.110406104 +0000 UTC m=+59.772811959 (delta=99.609319ms)
	I0908 14:47:13.234297 1160669 fix.go:200] guest clock delta is within tolerance: 99.609319ms
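
fix.go reads `date +%s.%N` from the guest and accepts the machine when the guest/host delta (99.6ms here) is within tolerance. A sketch of that comparison, with the tolerance value assumed rather than taken from minikube:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Guest timestamp as returned by `date +%s.%N` in the log above.
	guestStr := "1757342833.210015423"
	secs, err := strconv.ParseFloat(guestStr, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	// Hypothetical tolerance; the log only says the 99.6ms delta passed.
	const tolerance = 2 * time.Second
	delta := time.Since(guest)
	fmt.Printf("delta=%v within tolerance=%v\n",
		delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}
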
	I0908 14:47:13.234318 1160669 start.go:83] releasing machines lock for "old-k8s-version-454279", held for 28.868136263s
	I0908 14:47:13.234361 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:13.234709 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetIP
	I0908 14:47:13.237700 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.238266 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:13.238303 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.238541 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:13.239356 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:13.239606 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:13.239737 1160669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:47:13.239792 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:13.239871 1160669 ssh_runner.go:195] Run: cat /version.json
	I0908 14:47:13.239905 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:13.243385 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.243476 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.243941 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:13.243993 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.244146 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:13.244186 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.244280 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:13.244427 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:13.244566 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:13.244670 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:13.244696 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:13.244876 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:13.244964 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:13.245000 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:13.343174 1160669 ssh_runner.go:195] Run: systemctl --version
	I0908 14:47:13.377738 1160669 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 14:47:09.242571 1161554 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 14:47:09.242614 1161554 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 14:47:09.242628 1161554 cache.go:58] Caching tarball of preloaded images
	I0908 14:47:09.242720 1161554 preload.go:172] Found /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 14:47:09.242733 1161554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 14:47:09.242856 1161554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/embed-certs-372004/config.json ...
	I0908 14:47:09.242886 1161554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/embed-certs-372004/config.json: {Name:mk36cbfc5ffff3b9800a8cb272fb6fc4e8a2f5b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:09.243050 1161554 start.go:360] acquireMachinesLock for embed-certs-372004: {Name:mk0626ae9b324aeb819357e3de70b05b9e4c30a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 14:47:13.551387 1160669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 14:47:13.560862 1160669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 14:47:13.560956 1160669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:47:13.585985 1160669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 14:47:13.586024 1160669 start.go:495] detecting cgroup driver to use...
	I0908 14:47:13.586136 1160669 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 14:47:13.609341 1160669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 14:47:13.630973 1160669 docker.go:218] disabling cri-docker service (if available) ...
	I0908 14:47:13.631082 1160669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 14:47:13.651272 1160669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 14:47:13.673082 1160669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 14:47:13.830972 1160669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 14:47:14.012858 1160669 docker.go:234] disabling docker service ...
	I0908 14:47:14.012936 1160669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 14:47:14.034138 1160669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 14:47:14.056076 1160669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 14:47:14.298395 1160669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 14:47:14.461146 1160669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:47:14.479862 1160669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:47:14.508390 1160669 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0908 14:47:14.508479 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.523751 1160669 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 14:47:14.523871 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.539963 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.555827 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.571980 1160669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:47:14.589217 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.604726 1160669 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.636771 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
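
The run of sed edits above rewrites the cri-o drop-in in place: pin the pause image, switch the cgroup manager, and seed default_sysctls. A native Go equivalent of the first two edits might look like the sketch below; it is illustrative only, since minikube shells out to sed over SSH as logged.

// Sketch of the sed edits above done natively: rewrite pause_image and
// cgroup_manager in the crio drop-in. Path and values mirror the log.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// (?m) makes ^/$ match per line, like sed's line-oriented addressing.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}
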
	I0908 14:47:14.651552 1160669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:47:14.665337 1160669 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 14:47:14.665417 1160669 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 14:47:14.690509 1160669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 14:47:14.705109 1160669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:14.866883 1160669 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 14:47:14.999587 1160669 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 14:47:14.999709 1160669 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 14:47:15.005990 1160669 start.go:563] Will wait 60s for crictl version
	I0908 14:47:15.006108 1160669 ssh_runner.go:195] Run: which crictl
	I0908 14:47:15.011598 1160669 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:47:15.061672 1160669 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 14:47:15.061784 1160669 ssh_runner.go:195] Run: crio --version
	I0908 14:47:15.096342 1160669 ssh_runner.go:195] Run: crio --version
	I0908 14:47:15.158474 1160669 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.29.1 ...
	I0908 14:47:13.236765 1161065 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 14:47:13.237043 1161065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:13.237097 1161065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:13.257879 1161065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I0908 14:47:13.258443 1161065 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:13.259015 1161065 main.go:141] libmachine: Using API Version  1
	I0908 14:47:13.259044 1161065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:13.259491 1161065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:13.259748 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetMachineName
	I0908 14:47:13.259917 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:13.260103 1161065 start.go:159] libmachine.API.Create for "no-preload-301894" (driver="kvm2")
	I0908 14:47:13.260133 1161065 client.go:168] LocalClient.Create starting
	I0908 14:47:13.260171 1161065 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem
	I0908 14:47:13.260211 1161065 main.go:141] libmachine: Decoding PEM data...
	I0908 14:47:13.260226 1161065 main.go:141] libmachine: Parsing certificate...
	I0908 14:47:13.260300 1161065 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem
	I0908 14:47:13.260321 1161065 main.go:141] libmachine: Decoding PEM data...
	I0908 14:47:13.260332 1161065 main.go:141] libmachine: Parsing certificate...
	I0908 14:47:13.260346 1161065 main.go:141] libmachine: Running pre-create checks...
	I0908 14:47:13.260354 1161065 main.go:141] libmachine: (no-preload-301894) Calling .PreCreateCheck
	I0908 14:47:13.260713 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetConfigRaw
	I0908 14:47:13.261185 1161065 main.go:141] libmachine: Creating machine...
	I0908 14:47:13.261200 1161065 main.go:141] libmachine: (no-preload-301894) Calling .Create
	I0908 14:47:13.261374 1161065 main.go:141] libmachine: (no-preload-301894) creating KVM machine...
	I0908 14:47:13.261395 1161065 main.go:141] libmachine: (no-preload-301894) creating network...
	I0908 14:47:13.262893 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found existing default KVM network
	I0908 14:47:13.264043 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:13.263851 1161595 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013a80}
	I0908 14:47:13.264081 1161065 main.go:141] libmachine: (no-preload-301894) DBG | created network xml: 
	I0908 14:47:13.264101 1161065 main.go:141] libmachine: (no-preload-301894) DBG | <network>
	I0908 14:47:13.264114 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   <name>mk-no-preload-301894</name>
	I0908 14:47:13.264124 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   <dns enable='no'/>
	I0908 14:47:13.264134 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   
	I0908 14:47:13.264149 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0908 14:47:13.264160 1161065 main.go:141] libmachine: (no-preload-301894) DBG |     <dhcp>
	I0908 14:47:13.264170 1161065 main.go:141] libmachine: (no-preload-301894) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0908 14:47:13.264183 1161065 main.go:141] libmachine: (no-preload-301894) DBG |     </dhcp>
	I0908 14:47:13.264193 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   </ip>
	I0908 14:47:13.264204 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   
	I0908 14:47:13.264215 1161065 main.go:141] libmachine: (no-preload-301894) DBG | </network>
	I0908 14:47:13.264229 1161065 main.go:141] libmachine: (no-preload-301894) DBG | 
	I0908 14:47:13.270638 1161065 main.go:141] libmachine: (no-preload-301894) DBG | trying to create private KVM network mk-no-preload-301894 192.168.39.0/24...
	I0908 14:47:13.368149 1161065 main.go:141] libmachine: (no-preload-301894) DBG | private KVM network mk-no-preload-301894 192.168.39.0/24 created
	I0908 14:47:13.368182 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:13.368076 1161595 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:47:13.368195 1161065 main.go:141] libmachine: (no-preload-301894) setting up store path in /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894 ...
	I0908 14:47:13.368219 1161065 main.go:141] libmachine: (no-preload-301894) building disk image from file:///home/jenkins/minikube-integration/21508-1116714/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 14:47:13.368234 1161065 main.go:141] libmachine: (no-preload-301894) Downloading /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21508-1116714/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 14:47:13.708843 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:13.708657 1161595 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa...
	I0908 14:47:13.876885 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:13.876750 1161595 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/no-preload-301894.rawdisk...
	I0908 14:47:13.876910 1161065 main.go:141] libmachine: (no-preload-301894) DBG | Writing magic tar header
	I0908 14:47:13.876924 1161065 main.go:141] libmachine: (no-preload-301894) DBG | Writing SSH key tar header
	I0908 14:47:13.877045 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:13.876948 1161595 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894 ...
	I0908 14:47:13.877145 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894
	I0908 14:47:13.877178 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines
	I0908 14:47:13.877201 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894 (perms=drwx------)
	I0908 14:47:13.877215 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:47:13.877231 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714
	I0908 14:47:13.877244 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0908 14:47:13.877258 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins
	I0908 14:47:13.877270 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home
	I0908 14:47:13.877284 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines (perms=drwxr-xr-x)
	I0908 14:47:13.877309 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube (perms=drwxr-xr-x)
	I0908 14:47:13.877354 1161065 main.go:141] libmachine: (no-preload-301894) DBG | skipping /home - not owner
	I0908 14:47:13.877374 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714 (perms=drwxrwxr-x)
	I0908 14:47:13.877390 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0908 14:47:13.877402 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0908 14:47:13.877414 1161065 main.go:141] libmachine: (no-preload-301894) creating domain...
	I0908 14:47:13.878601 1161065 main.go:141] libmachine: (no-preload-301894) define libvirt domain using xml: 
	I0908 14:47:13.878635 1161065 main.go:141] libmachine: (no-preload-301894) <domain type='kvm'>
	I0908 14:47:13.878647 1161065 main.go:141] libmachine: (no-preload-301894)   <name>no-preload-301894</name>
	I0908 14:47:13.878661 1161065 main.go:141] libmachine: (no-preload-301894)   <memory unit='MiB'>3072</memory>
	I0908 14:47:13.878671 1161065 main.go:141] libmachine: (no-preload-301894)   <vcpu>2</vcpu>
	I0908 14:47:13.878677 1161065 main.go:141] libmachine: (no-preload-301894)   <features>
	I0908 14:47:13.878688 1161065 main.go:141] libmachine: (no-preload-301894)     <acpi/>
	I0908 14:47:13.878697 1161065 main.go:141] libmachine: (no-preload-301894)     <apic/>
	I0908 14:47:13.878706 1161065 main.go:141] libmachine: (no-preload-301894)     <pae/>
	I0908 14:47:13.878715 1161065 main.go:141] libmachine: (no-preload-301894)     
	I0908 14:47:13.878725 1161065 main.go:141] libmachine: (no-preload-301894)   </features>
	I0908 14:47:13.878735 1161065 main.go:141] libmachine: (no-preload-301894)   <cpu mode='host-passthrough'>
	I0908 14:47:13.878743 1161065 main.go:141] libmachine: (no-preload-301894)   
	I0908 14:47:13.878752 1161065 main.go:141] libmachine: (no-preload-301894)   </cpu>
	I0908 14:47:13.878784 1161065 main.go:141] libmachine: (no-preload-301894)   <os>
	I0908 14:47:13.878810 1161065 main.go:141] libmachine: (no-preload-301894)     <type>hvm</type>
	I0908 14:47:13.878822 1161065 main.go:141] libmachine: (no-preload-301894)     <boot dev='cdrom'/>
	I0908 14:47:13.878829 1161065 main.go:141] libmachine: (no-preload-301894)     <boot dev='hd'/>
	I0908 14:47:13.878842 1161065 main.go:141] libmachine: (no-preload-301894)     <bootmenu enable='no'/>
	I0908 14:47:13.878851 1161065 main.go:141] libmachine: (no-preload-301894)   </os>
	I0908 14:47:13.878859 1161065 main.go:141] libmachine: (no-preload-301894)   <devices>
	I0908 14:47:13.878869 1161065 main.go:141] libmachine: (no-preload-301894)     <disk type='file' device='cdrom'>
	I0908 14:47:13.878887 1161065 main.go:141] libmachine: (no-preload-301894)       <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/boot2docker.iso'/>
	I0908 14:47:13.878903 1161065 main.go:141] libmachine: (no-preload-301894)       <target dev='hdc' bus='scsi'/>
	I0908 14:47:13.878914 1161065 main.go:141] libmachine: (no-preload-301894)       <readonly/>
	I0908 14:47:13.878924 1161065 main.go:141] libmachine: (no-preload-301894)     </disk>
	I0908 14:47:13.878934 1161065 main.go:141] libmachine: (no-preload-301894)     <disk type='file' device='disk'>
	I0908 14:47:13.878947 1161065 main.go:141] libmachine: (no-preload-301894)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0908 14:47:13.878963 1161065 main.go:141] libmachine: (no-preload-301894)       <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/no-preload-301894.rawdisk'/>
	I0908 14:47:13.878973 1161065 main.go:141] libmachine: (no-preload-301894)       <target dev='hda' bus='virtio'/>
	I0908 14:47:13.879001 1161065 main.go:141] libmachine: (no-preload-301894)     </disk>
	I0908 14:47:13.879031 1161065 main.go:141] libmachine: (no-preload-301894)     <interface type='network'>
	I0908 14:47:13.879042 1161065 main.go:141] libmachine: (no-preload-301894)       <source network='mk-no-preload-301894'/>
	I0908 14:47:13.879050 1161065 main.go:141] libmachine: (no-preload-301894)       <model type='virtio'/>
	I0908 14:47:13.879059 1161065 main.go:141] libmachine: (no-preload-301894)     </interface>
	I0908 14:47:13.879069 1161065 main.go:141] libmachine: (no-preload-301894)     <interface type='network'>
	I0908 14:47:13.879079 1161065 main.go:141] libmachine: (no-preload-301894)       <source network='default'/>
	I0908 14:47:13.879090 1161065 main.go:141] libmachine: (no-preload-301894)       <model type='virtio'/>
	I0908 14:47:13.879100 1161065 main.go:141] libmachine: (no-preload-301894)     </interface>
	I0908 14:47:13.879110 1161065 main.go:141] libmachine: (no-preload-301894)     <serial type='pty'>
	I0908 14:47:13.879122 1161065 main.go:141] libmachine: (no-preload-301894)       <target port='0'/>
	I0908 14:47:13.879133 1161065 main.go:141] libmachine: (no-preload-301894)     </serial>
	I0908 14:47:13.879143 1161065 main.go:141] libmachine: (no-preload-301894)     <console type='pty'>
	I0908 14:47:13.879153 1161065 main.go:141] libmachine: (no-preload-301894)       <target type='serial' port='0'/>
	I0908 14:47:13.879160 1161065 main.go:141] libmachine: (no-preload-301894)     </console>
	I0908 14:47:13.879165 1161065 main.go:141] libmachine: (no-preload-301894)     <rng model='virtio'>
	I0908 14:47:13.879173 1161065 main.go:141] libmachine: (no-preload-301894)       <backend model='random'>/dev/random</backend>
	I0908 14:47:13.879181 1161065 main.go:141] libmachine: (no-preload-301894)     </rng>
	I0908 14:47:13.879201 1161065 main.go:141] libmachine: (no-preload-301894)     
	I0908 14:47:13.879210 1161065 main.go:141] libmachine: (no-preload-301894)     
	I0908 14:47:13.879231 1161065 main.go:141] libmachine: (no-preload-301894)   </devices>
	I0908 14:47:13.879251 1161065 main.go:141] libmachine: (no-preload-301894) </domain>
	I0908 14:47:13.879284 1161065 main.go:141] libmachine: (no-preload-301894) 
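
The domain XML dumped above is rendered from the machine's settings (name, memory, vCPUs, disk, networks). Below is a hedged sketch of how such a definition can be produced with text/template; the struct fields and template are illustrative, not minikube's actual types.

// Sketch: render a trimmed-down libvirt domain definition from machine
// settings, in the spirit of the XML logged above.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type machine struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	m := machine{"no-preload-301894", 3072, 2,
		"/path/to/no-preload-301894.rawdisk", "mk-no-preload-301894"}
	if err := t.Execute(os.Stdout, m); err != nil {
		panic(err)
	}
}
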
	I0908 14:47:13.884517 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:fd:a3:0d in network default
	I0908 14:47:13.885269 1161065 main.go:141] libmachine: (no-preload-301894) starting domain...
	I0908 14:47:13.885298 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:13.885316 1161065 main.go:141] libmachine: (no-preload-301894) ensuring networks are active...
	I0908 14:47:13.886202 1161065 main.go:141] libmachine: (no-preload-301894) Ensuring network default is active
	I0908 14:47:13.886570 1161065 main.go:141] libmachine: (no-preload-301894) Ensuring network mk-no-preload-301894 is active
	I0908 14:47:13.887171 1161065 main.go:141] libmachine: (no-preload-301894) getting domain XML...
	I0908 14:47:13.888178 1161065 main.go:141] libmachine: (no-preload-301894) creating domain...
	I0908 14:47:14.279275 1161065 main.go:141] libmachine: (no-preload-301894) waiting for IP...
	I0908 14:47:14.280366 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:14.280906 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:14.280940 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:14.280885 1161595 retry.go:31] will retry after 299.887118ms: waiting for domain to come up
	I0908 14:47:14.582745 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:14.583325 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:14.583356 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:14.583297 1161595 retry.go:31] will retry after 249.657328ms: waiting for domain to come up
	I0908 14:47:14.834783 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:14.835389 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:14.835426 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:14.835339 1161595 retry.go:31] will retry after 436.07914ms: waiting for domain to come up
	I0908 14:47:15.273234 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:15.273849 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:15.273905 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:15.273842 1161595 retry.go:31] will retry after 388.986383ms: waiting for domain to come up
	I0908 14:47:15.664745 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:15.665480 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:15.665516 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:15.665454 1161595 retry.go:31] will retry after 697.087111ms: waiting for domain to come up
	I0908 14:47:16.364223 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:16.364917 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:16.364953 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:16.364892 1161595 retry.go:31] will retry after 932.556534ms: waiting for domain to come up
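
Each "will retry after" line above comes from a poll loop waiting for the freshly defined domain to pick up a DHCP lease, sleeping a jittered, growing interval between attempts. A minimal sketch of that retry shape, with lookupIP standing in for the libvirt lease query (a placeholder, not a real API):

// Sketch of the retry.go behaviour above: poll until the domain reports an
// IP, with jittered, growing delays between attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease query; always fails here.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jitter and grow the delay, matching the varying
		// "will retry after ..." intervals in the log.
		d := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
		time.Sleep(d)
		delay *= 2
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	_, err := waitForIP(3 * time.Second)
	fmt.Println(err)
}
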
	I0908 14:47:15.230993 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetIP
	I0908 14:47:15.234315 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:15.234723 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:15.234760 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:15.234980 1160669 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0908 14:47:15.240407 1160669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
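
The bash one-liner above makes the host.minikube.internal mapping idempotent: any stale line for the name is filtered out before the fresh entry is appended and the file is copied back over /etc/hosts. The same rewrite expressed in Go, as an illustrative sketch:

// Sketch of the /etc/hosts rewrite above: drop any stale entry for the
// name, append the fresh ip<TAB>name mapping, and write the file back.
package main

import (
	"os"
	"strings"
)

func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors grep -v $'\t<name>$': keep every line except the old entry.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
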
	I0908 14:47:15.259093 1160669 kubeadm.go:875] updating cluster {Name:old-k8s-version-454279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-454279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.48 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 14:47:15.259235 1160669 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 14:47:15.259281 1160669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:15.301882 1160669 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0". assuming images are not preloaded.
	I0908 14:47:15.301959 1160669 ssh_runner.go:195] Run: which lz4
	I0908 14:47:15.307335 1160669 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 14:47:15.313251 1160669 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 14:47:15.313305 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457056555 bytes)
	I0908 14:47:17.472852 1160669 crio.go:462] duration metric: took 2.165558075s to copy over tarball
	I0908 14:47:17.472961 1160669 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 14:47:19.628401 1160669 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155410467s)
	I0908 14:47:19.628432 1160669 crio.go:469] duration metric: took 2.155544498s to extract the tarball
	I0908 14:47:19.628440 1160669 ssh_runner.go:146] rm: /preloaded.tar.lz4
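
The step above scp's the preload tarball into the guest, unpacks it with tar -I lz4 preserving xattrs, logs a duration metric, and removes the tarball. A sketch of the extract-and-time part via os/exec, with the command and flags copied from the log:

// Sketch of the preload extraction above: run the logged tar command and
// report a duration metric the way crio.go does.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Equivalent of: sudo tar --xattrs --xattrs-include security.capability \
	//   -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n",
		time.Since(start))
}
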
	I0908 14:47:19.675281 1160669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:19.727396 1160669 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 14:47:19.727429 1160669 cache_images.go:85] Images are preloaded, skipping loading
	I0908 14:47:19.727440 1160669 kubeadm.go:926] updating node { 192.168.50.48 8443 v1.28.0 crio true true} ...
	I0908 14:47:19.727610 1160669 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-454279 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-454279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 14:47:19.727733 1160669 ssh_runner.go:195] Run: crio config
	I0908 14:47:19.779753 1160669 cni.go:84] Creating CNI manager for ""
	I0908 14:47:19.779853 1160669 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:47:19.779879 1160669 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 14:47:19.779945 1160669 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.48 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-454279 NodeName:old-k8s-version-454279 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 14:47:19.780270 1160669 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-454279"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 14:47:19.780390 1160669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0908 14:47:19.793661 1160669 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 14:47:19.793772 1160669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 14:47:19.806295 1160669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0908 14:47:19.830043 1160669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 14:47:19.854231 1160669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0908 14:47:19.877290 1160669 ssh_runner.go:195] Run: grep 192.168.50.48	control-plane.minikube.internal$ /etc/hosts
	I0908 14:47:19.882225 1160669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 14:47:19.898708 1160669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:20.072508 1160669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:47:20.115151 1160669 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279 for IP: 192.168.50.48
	I0908 14:47:20.115180 1160669 certs.go:194] generating shared ca certs ...
	I0908 14:47:20.115201 1160669 certs.go:226] acquiring lock for ca certs: {Name:mk10dcd85eee4d8b0413bd848f61156bf964b6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.115429 1160669 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key
	I0908 14:47:20.115510 1160669 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key
	I0908 14:47:20.115532 1160669 certs.go:256] generating profile certs ...
	I0908 14:47:20.115621 1160669 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.key
	I0908 14:47:20.115645 1160669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt with IP's: []
	I0908 14:47:20.293700 1160669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt ...
	I0908 14:47:20.293741 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: {Name:mk206ca7f18f3cdbac0fc6bdbd1f7a44a1300b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.293963 1160669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.key ...
	I0908 14:47:20.293983 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.key: {Name:mk2f6e6e643bf72cd3b7e7fd62b6e0345a3d0b4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.294237 1160669 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key.ed44818c
	I0908 14:47:20.294268 1160669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt.ed44818c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.48]
	I0908 14:47:20.334022 1160669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt.ed44818c ...
	I0908 14:47:20.334063 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt.ed44818c: {Name:mka3682baa7d5ffca313ea6762fc49d2c8e24276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.334247 1160669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key.ed44818c ...
	I0908 14:47:20.334264 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key.ed44818c: {Name:mk503685508fa39889cb4dda79781df5950a1ee8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.334366 1160669 certs.go:381] copying /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt.ed44818c -> /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt
	I0908 14:47:20.334483 1160669 certs.go:385] copying /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key.ed44818c -> /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key
	I0908 14:47:20.334579 1160669 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.key
	I0908 14:47:20.334609 1160669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.crt with IP's: []
	I0908 14:47:20.583404 1160669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.crt ...
	I0908 14:47:20.583440 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.crt: {Name:mk3dfdd9b5abba8bdc7d1a726f96ef5fb2519b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.583668 1160669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.key ...
	I0908 14:47:20.583686 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.key: {Name:mkf3c224e3a6d70be668ea603104347ec1607f19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
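
The certs.go/crypto.go sequence above signs per-profile certificates (client, apiserver, aggregator) with the shared minikube CA, embedding the IP SANs listed at 14:47:20.294268. Below is a self-contained sketch with crypto/x509; a throwaway CA is generated inline so the example runs on its own, whereas minikube loads its existing ca.crt/ca.key. Error handling is trimmed for brevity.

// Sketch: sign a certificate with a CA, embedding the apiserver IP SANs
// from the log. Illustrative only; not minikube's crypto.go.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube would reuse .minikube/ca.crt and ca.key).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate carrying the IP SANs logged above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.48"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
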
	I0908 14:47:20.583890 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem (1338 bytes)
	W0908 14:47:20.583949 1160669 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875_empty.pem, impossibly tiny 0 bytes
	I0908 14:47:20.583966 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 14:47:20.584008 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem (1082 bytes)
	I0908 14:47:20.584051 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem (1123 bytes)
	I0908 14:47:20.584090 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem (1675 bytes)
	I0908 14:47:20.584150 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:20.584833 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 14:47:20.619765 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 14:47:20.656093 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 14:47:20.690850 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 14:47:20.725652 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0908 14:47:20.762769 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 14:47:20.798926 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 14:47:20.839948 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 14:47:20.887765 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /usr/share/ca-certificates/11208752.pem (1708 bytes)
	I0908 14:47:20.923331 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 14:47:20.959241 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem --> /usr/share/ca-certificates/1120875.pem (1338 bytes)
	I0908 14:47:20.993186 1160669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 14:47:21.017923 1160669 ssh_runner.go:195] Run: openssl version
	I0908 14:47:21.025582 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 14:47:21.040260 1160669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:21.046850 1160669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:35 /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:21.046933 1160669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:21.056488 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 14:47:21.072758 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120875.pem && ln -fs /usr/share/ca-certificates/1120875.pem /etc/ssl/certs/1120875.pem"
	I0908 14:47:21.089250 1160669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120875.pem
	I0908 14:47:21.097570 1160669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 13:46 /usr/share/ca-certificates/1120875.pem
	I0908 14:47:21.097654 1160669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120875.pem
	I0908 14:47:21.109209 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120875.pem /etc/ssl/certs/51391683.0"
	I0908 14:47:21.124664 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208752.pem && ln -fs /usr/share/ca-certificates/11208752.pem /etc/ssl/certs/11208752.pem"
	I0908 14:47:21.145131 1160669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208752.pem
	I0908 14:47:21.151534 1160669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 13:46 /usr/share/ca-certificates/11208752.pem
	I0908 14:47:21.151623 1160669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208752.pem
	I0908 14:47:21.160393 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11208752.pem /etc/ssl/certs/3ec20f2e.0"
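
Each pairing above of `openssl x509 -hash -noout` with `ln -fs .../<hash>.0` publishes a certificate under its OpenSSL subject hash so TLS libraries can find it in /etc/ssl/certs. A small sketch of that step, invoking openssl exactly as logged and recreating the link:

// Sketch: compute a certificate's OpenSSL subject hash and publish the
// /etc/ssl/certs/<hash>.0 symlink, mirroring the `ln -fs` in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
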
	I0908 14:47:21.176948 1160669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 14:47:21.183123 1160669 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 14:47:21.183194 1160669 kubeadm.go:392] StartCluster: {Name:old-k8s-version-454279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-454279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.48 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:47:21.183299 1160669 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 14:47:21.183369 1160669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 14:47:21.234235 1160669 cri.go:89] found id: ""
	I0908 14:47:21.234343 1160669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 14:47:21.248156 1160669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 14:47:21.264544 1160669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 14:47:21.279114 1160669 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 14:47:21.279139 1160669 kubeadm.go:157] found existing configuration files:
	
	I0908 14:47:21.279216 1160669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 14:47:21.295113 1160669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 14:47:21.295199 1160669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 14:47:21.311854 1160669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 14:47:21.326337 1160669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 14:47:21.326413 1160669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 14:47:21.340653 1160669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 14:47:21.354985 1160669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 14:47:21.355081 1160669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 14:47:21.369972 1160669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 14:47:21.385749 1160669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 14:47:21.385834 1160669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 14:47:21.400297 1160669 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0908 14:47:21.469564 1160669 kubeadm.go:310] [init] Using Kubernetes version: v1.28.0
	I0908 14:47:21.469626 1160669 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 14:47:21.629916 1160669 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 14:47:21.630068 1160669 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 14:47:21.630197 1160669 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0908 14:47:21.876317 1160669 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 14:47:17.299958 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:17.300515 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:17.300546 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:17.300489 1161595 retry.go:31] will retry after 873.277523ms: waiting for domain to come up
	I0908 14:47:18.175055 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:18.175479 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:18.175543 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:18.175449 1161595 retry.go:31] will retry after 1.230605044s: waiting for domain to come up
	I0908 14:47:19.408231 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:19.408892 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:19.408972 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:19.408874 1161595 retry.go:31] will retry after 1.41166106s: waiting for domain to come up
	I0908 14:47:20.822687 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:20.823353 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:20.823388 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:20.823291 1161595 retry.go:31] will retry after 1.869801403s: waiting for domain to come up
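
The interleaved 1161065 lines show minikube's retry helper polling libvirt for the new domain's DHCP lease, sleeping a little longer on each attempt (873ms, 1.23s, 1.41s, 1.87s, ...). A generic sketch of that wait-with-growing-backoff pattern; the helper name and the 1.5x growth factor are illustrative, not minikube's exact policy:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitFor polls check until it succeeds or the deadline passes,
    // growing the sleep between attempts like the retry lines above.
    func waitFor(check func() error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := 800 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
    		}
    		fmt.Printf("attempt %d: will retry after %v: %v\n", attempt, delay, err)
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // grow ~1.5x per attempt
    	}
    }

    func main() {
    	start := time.Now()
    	err := waitFor(func() error {
    		if time.Since(start) < 3*time.Second {
    			return errors.New("waiting for domain to come up")
    		}
    		return nil // pretend the DHCP lease appeared
    	}, 30*time.Second)
    	fmt.Println("done:", err)
    }
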
	I0908 14:47:21.994887 1160669 out.go:252]   - Generating certificates and keys ...
	I0908 14:47:21.995014 1160669 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 14:47:21.995109 1160669 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 14:47:22.063280 1160669 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 14:47:22.401923 1160669 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 14:47:22.676005 1160669 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 14:47:22.728731 1160669 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 14:47:23.071243 1160669 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 14:47:23.071579 1160669 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-454279] and IPs [192.168.50.48 127.0.0.1 ::1]
	I0908 14:47:23.563705 1160669 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 14:47:23.563931 1160669 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-454279] and IPs [192.168.50.48 127.0.0.1 ::1]
	I0908 14:47:23.759378 1160669 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 14:47:24.010383 1160669 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 14:47:24.263976 1160669 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 14:47:24.265614 1160669 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 14:47:24.463358 1160669 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 14:47:24.675739 1160669 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 14:47:24.953446 1160669 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 14:47:25.072515 1160669 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
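
The [certs] phase above first establishes a CA, then signs serving certificates whose SANs carry the node's names and IPs (for example the etcd/server cert for [localhost old-k8s-version-454279] and [192.168.50.48 127.0.0.1 ::1]). A compressed crypto/x509 sketch of that CA-then-leaf flow, with illustrative key sizes and subjects:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// 1. CA key + self-signed CA certificate.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "etcd-ca"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// 2. Leaf serving cert signed by the CA, with the SANs from the log.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "etcd-server"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		DNSNames:     []string{"localhost", "old-k8s-version-454279"},
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.50.48"),
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("::1"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	fmt.Printf("CA: %d bytes, leaf: %d bytes\n", len(caDER), len(leafDER))
    }
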
	I0908 14:47:25.072999 1160669 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 14:47:25.075705 1160669 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 14:47:22.695776 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:22.696339 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:22.696364 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:22.696301 1161595 retry.go:31] will retry after 2.848523465s: waiting for domain to come up
	I0908 14:47:25.546633 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:25.547260 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:25.547300 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:25.547216 1161595 retry.go:31] will retry after 3.223127324s: waiting for domain to come up
	I0908 14:47:25.078393 1160669 out.go:252]   - Booting up control plane ...
	I0908 14:47:25.078534 1160669 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 14:47:25.078627 1160669 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 14:47:25.078715 1160669 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 14:47:25.112469 1160669 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 14:47:25.113621 1160669 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 14:47:25.113770 1160669 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 14:47:25.322425 1160669 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0908 14:47:28.772513 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:28.773199 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:28.773274 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:28.773158 1161595 retry.go:31] will retry after 3.561518321s: waiting for domain to come up
	I0908 14:47:31.822840 1160669 kubeadm.go:310] [apiclient] All control plane components are healthy after 6.503403 seconds
	I0908 14:47:31.822974 1160669 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 14:47:31.843405 1160669 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 14:47:32.384853 1160669 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 14:47:32.385121 1160669 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-454279 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 14:47:32.899251 1160669 kubeadm.go:310] [bootstrap-token] Using token: qk5t9l.4qbiul1i99fdbzyv
	I0908 14:47:32.900654 1160669 out.go:252]   - Configuring RBAC rules ...
	I0908 14:47:32.900828 1160669 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 14:47:32.910356 1160669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 14:47:32.922202 1160669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 14:47:32.925892 1160669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 14:47:32.935405 1160669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 14:47:32.940669 1160669 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 14:47:32.959374 1160669 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 14:47:33.268156 1160669 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 14:47:33.335840 1160669 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 14:47:33.338648 1160669 kubeadm.go:310] 
	I0908 14:47:33.338750 1160669 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 14:47:33.338764 1160669 kubeadm.go:310] 
	I0908 14:47:33.338898 1160669 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 14:47:33.338921 1160669 kubeadm.go:310] 
	I0908 14:47:33.338943 1160669 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 14:47:33.339049 1160669 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 14:47:33.339152 1160669 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 14:47:33.339184 1160669 kubeadm.go:310] 
	I0908 14:47:33.339256 1160669 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 14:47:33.339265 1160669 kubeadm.go:310] 
	I0908 14:47:33.339367 1160669 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 14:47:33.339388 1160669 kubeadm.go:310] 
	I0908 14:47:33.339469 1160669 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 14:47:33.339580 1160669 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 14:47:33.339725 1160669 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 14:47:33.339745 1160669 kubeadm.go:310] 
	I0908 14:47:33.339881 1160669 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 14:47:33.339961 1160669 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 14:47:33.339968 1160669 kubeadm.go:310] 
	I0908 14:47:33.340077 1160669 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qk5t9l.4qbiul1i99fdbzyv \
	I0908 14:47:33.340195 1160669 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b74fdb5b49b8a5f2d0d805722ad58fb11edbe1ed30e10a54ed528060545c93ba \
	I0908 14:47:33.340226 1160669 kubeadm.go:310] 	--control-plane 
	I0908 14:47:33.340235 1160669 kubeadm.go:310] 
	I0908 14:47:33.340367 1160669 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 14:47:33.340382 1160669 kubeadm.go:310] 
	I0908 14:47:33.340461 1160669 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qk5t9l.4qbiul1i99fdbzyv \
	I0908 14:47:33.340556 1160669 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b74fdb5b49b8a5f2d0d805722ad58fb11edbe1ed30e10a54ed528060545c93ba 
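
The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the cluster CA's SubjectPublicKeyInfo, which lets a joining node pin the CA without the cert being pre-shared. A sketch that recomputes it from /etc/kubernetes/pki/ca.crt (the standard kubeadm path; adjust if the CA lives elsewhere):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
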
	I0908 14:47:33.343383 1160669 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 14:47:33.343423 1160669 cni.go:84] Creating CNI manager for ""
	I0908 14:47:33.343431 1160669 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:47:33.346086 1160669 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 14:47:33.347584 1160669 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 14:47:32.339013 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:32.339637 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:32.339691 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:32.339583 1161595 retry.go:31] will retry after 4.732018081s: waiting for domain to come up
	I0908 14:47:37.073055 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.073619 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has current primary IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.073642 1161065 main.go:141] libmachine: (no-preload-301894) found domain IP: 192.168.39.135
	I0908 14:47:37.073661 1161065 main.go:141] libmachine: (no-preload-301894) reserving static IP address...
	I0908 14:47:37.074002 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find host DHCP lease matching {name: "no-preload-301894", mac: "52:54:00:d6:d3:58", ip: "192.168.39.135"} in network mk-no-preload-301894
	I0908 14:47:33.383475 1160669 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
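
The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration. A sketch that writes a minimal conflist of the same shape; the JSON below is an illustrative bridge+portmap chain, not a byte-for-byte copy of minikube's file, and writing under /etc/cni needs root:

    package main

    import (
    	"log"
    	"os"
    )

    // A minimal bridge CNI chain: the bridge plugin wires pods onto a
    // host bridge with host-local IPAM, and portmap handles hostPorts.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
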
	I0908 14:47:33.454688 1160669 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 14:47:33.454771 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:33.454806 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-454279 minikube.k8s.io/updated_at=2025_09_08T14_47_33_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba minikube.k8s.io/name=old-k8s-version-454279 minikube.k8s.io/primary=true
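
The create clusterrolebinding Run above grants cluster-admin to the kube-system:default service account under the name minikube-rbac. The same object built with the k8s.io/api types and printed as JSON; this is a sketch of the shape only, and applying it would normally go through kubectl or client-go:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	rbacv1 "k8s.io/api/rbac/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	crb := rbacv1.ClusterRoleBinding{
    		TypeMeta:   metav1.TypeMeta{APIVersion: "rbac.authorization.k8s.io/v1", Kind: "ClusterRoleBinding"},
    		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
    		Subjects: []rbacv1.Subject{{
    			Kind:      "ServiceAccount",
    			Name:      "default",
    			Namespace: "kube-system",
    		}},
    		RoleRef: rbacv1.RoleRef{
    			APIGroup: "rbac.authorization.k8s.io",
    			Kind:     "ClusterRole",
    			Name:     "cluster-admin",
    		},
    	}
    	out, _ := json.MarshalIndent(crb, "", "  ")
    	fmt.Println(string(out))
    }
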
	I0908 14:47:33.720581 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:33.736508 1160669 ops.go:34] apiserver oom_adj: -16
	I0908 14:47:34.221515 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:34.721604 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:35.221601 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:35.721498 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:36.220816 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:36.720968 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:37.221044 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:37.721319 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:38.220737 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
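
Alongside the service-account polling, minikube reads the apiserver's OOM score (the `cat /proc/$(pgrep kube-apiserver)/oom_adj` Run above reported -16), which tells the kernel to spare the apiserver under memory pressure. A Linux-only sketch that finds a process by name and reads the same file, with a scan over /proc/<pid>/comm standing in for pgrep:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // oomAdjByName scans /proc for a process whose comm matches name and
    // returns its oom_adj, like `cat /proc/$(pgrep name)/oom_adj`.
    func oomAdjByName(name string) (string, error) {
    	matches, err := filepath.Glob("/proc/[0-9]*/comm")
    	if err != nil {
    		return "", err
    	}
    	for _, comm := range matches {
    		data, err := os.ReadFile(comm)
    		if err != nil {
    			continue // process may have exited mid-scan
    		}
    		if strings.TrimSpace(string(data)) != name {
    			continue
    		}
    		adj, err := os.ReadFile(filepath.Join(filepath.Dir(comm), "oom_adj"))
    		if err != nil {
    			return "", err
    		}
    		return strings.TrimSpace(string(adj)), nil
    	}
    	return "", fmt.Errorf("no process named %q", name)
    }

    func main() {
    	adj, err := oomAdjByName("kube-apiserver")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("apiserver oom_adj:", adj)
    }
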
	I0908 14:47:38.934186 1161261 start.go:364] duration metric: took 43.636529867s to acquireMachinesLock for "pause-120061"
	I0908 14:47:38.934281 1161261 start.go:96] Skipping create...Using existing machine configuration
	I0908 14:47:38.934293 1161261 fix.go:54] fixHost starting: 
	I0908 14:47:38.934795 1161261 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:38.934865 1161261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:38.953899 1161261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0908 14:47:38.954585 1161261 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:38.955180 1161261 main.go:141] libmachine: Using API Version  1
	I0908 14:47:38.955214 1161261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:38.955734 1161261 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:38.955978 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:38.956209 1161261 main.go:141] libmachine: (pause-120061) Calling .GetState
	I0908 14:47:38.958177 1161261 fix.go:112] recreateIfNeeded on pause-120061: state=Running err=<nil>
	W0908 14:47:38.958231 1161261 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 14:47:38.960278 1161261 out.go:252] * Updating the running kvm2 "pause-120061" VM ...
	I0908 14:47:38.960324 1161261 machine.go:93] provisionDockerMachine start ...
	I0908 14:47:38.960364 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:38.960695 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:38.964020 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:38.964583 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:38.964624 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:38.964874 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:38.965165 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:38.965375 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:38.965541 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:38.965701 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:38.966030 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:38.966048 1161261 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 14:47:39.087038 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-120061
	
	I0908 14:47:39.087094 1161261 main.go:141] libmachine: (pause-120061) Calling .GetMachineName
	I0908 14:47:39.087412 1161261 buildroot.go:166] provisioning hostname "pause-120061"
	I0908 14:47:39.087435 1161261 main.go:141] libmachine: (pause-120061) Calling .GetMachineName
	I0908 14:47:39.087596 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.091091 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.091719 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.091743 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.092016 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.092297 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.092524 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.092745 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.092990 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:39.093266 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:39.093281 1161261 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-120061 && echo "pause-120061" | sudo tee /etc/hostname
	I0908 14:47:39.231080 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-120061
	
	I0908 14:47:39.231115 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.234280 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.234692 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.234735 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.234995 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.235241 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.235419 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.235543 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.235743 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:39.235953 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:39.235969 1161261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-120061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-120061/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-120061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:47:39.358526 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:47:39.358561 1161261 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21508-1116714/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-1116714/.minikube}
	I0908 14:47:39.358630 1161261 buildroot.go:174] setting up certificates
	I0908 14:47:39.358646 1161261 provision.go:84] configureAuth start
	I0908 14:47:39.358662 1161261 main.go:141] libmachine: (pause-120061) Calling .GetMachineName
	I0908 14:47:39.359057 1161261 main.go:141] libmachine: (pause-120061) Calling .GetIP
	I0908 14:47:39.362365 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.362831 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.362858 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.362995 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.366014 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.366565 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.366609 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.366788 1161261 provision.go:143] copyHostCerts
	I0908 14:47:39.366878 1161261 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem, removing ...
	I0908 14:47:39.366900 1161261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem
	I0908 14:47:39.366971 1161261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem (1082 bytes)
	I0908 14:47:39.367120 1161261 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem, removing ...
	I0908 14:47:39.367134 1161261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem
	I0908 14:47:39.367165 1161261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem (1123 bytes)
	I0908 14:47:39.367258 1161261 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem, removing ...
	I0908 14:47:39.367269 1161261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem
	I0908 14:47:39.367297 1161261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem (1675 bytes)
	I0908 14:47:39.367390 1161261 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem org=jenkins.pause-120061 san=[127.0.0.1 192.168.61.147 localhost minikube pause-120061]
	I0908 14:47:39.573674 1161261 provision.go:177] copyRemoteCerts
	I0908 14:47:39.573751 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:47:39.573781 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.577127 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.577650 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.577687 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.577836 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.578123 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.578302 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.578501 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
	I0908 14:47:39.678101 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 14:47:39.716835 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 14:47:39.765726 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 14:47:39.813075 1161261 provision.go:87] duration metric: took 454.409899ms to configureAuth
	I0908 14:47:39.813115 1161261 buildroot.go:189] setting minikube options for container-runtime
	I0908 14:47:39.813416 1161261 config.go:182] Loaded profile config "pause-120061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:47:39.813522 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.816873 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.817323 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.817356 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.817651 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.817919 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.818144 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.818328 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.818555 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:39.818896 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:39.818913 1161261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 14:47:37.173028 1161065 main.go:141] libmachine: (no-preload-301894) reserved static IP address 192.168.39.135 for domain no-preload-301894
	I0908 14:47:37.173058 1161065 main.go:141] libmachine: (no-preload-301894) waiting for SSH...
	I0908 14:47:37.173117 1161065 main.go:141] libmachine: (no-preload-301894) DBG | Getting to WaitForSSH function...
	I0908 14:47:37.176590 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.177193 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.177248 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.177372 1161065 main.go:141] libmachine: (no-preload-301894) DBG | Using SSH client type: external
	I0908 14:47:37.177396 1161065 main.go:141] libmachine: (no-preload-301894) DBG | Using SSH private key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa (-rw-------)
	I0908 14:47:37.177431 1161065 main.go:141] libmachine: (no-preload-301894) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 14:47:37.177445 1161065 main.go:141] libmachine: (no-preload-301894) DBG | About to run SSH command:
	I0908 14:47:37.177458 1161065 main.go:141] libmachine: (no-preload-301894) DBG | exit 0
	I0908 14:47:37.309120 1161065 main.go:141] libmachine: (no-preload-301894) DBG | SSH cmd err, output: <nil>: 
	I0908 14:47:37.309419 1161065 main.go:141] libmachine: (no-preload-301894) KVM machine creation complete
	I0908 14:47:37.309836 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetConfigRaw
	I0908 14:47:37.310480 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:37.310692 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:37.310909 1161065 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0908 14:47:37.310929 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetState
	I0908 14:47:37.312562 1161065 main.go:141] libmachine: Detecting operating system of created instance...
	I0908 14:47:37.312579 1161065 main.go:141] libmachine: Waiting for SSH to be available...
	I0908 14:47:37.312584 1161065 main.go:141] libmachine: Getting to WaitForSSH function...
	I0908 14:47:37.312589 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:37.315694 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.316135 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.316157 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.316356 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:37.316618 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.316798 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.316974 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:37.317197 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:37.317455 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:37.317468 1161065 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0908 14:47:37.435700 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:47:37.435729 1161065 main.go:141] libmachine: Detecting the provisioner...
	I0908 14:47:37.435738 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:37.438619 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.439018 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.439050 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.439250 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:37.439458 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.439619 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.439750 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:37.439934 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:37.440183 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:37.440196 1161065 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0908 14:47:37.557514 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0908 14:47:37.557585 1161065 main.go:141] libmachine: found compatible host: buildroot
	I0908 14:47:37.557596 1161065 main.go:141] libmachine: Provisioning with buildroot...
	I0908 14:47:37.557608 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetMachineName
	I0908 14:47:37.557921 1161065 buildroot.go:166] provisioning hostname "no-preload-301894"
	I0908 14:47:37.557951 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetMachineName
	I0908 14:47:37.558207 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:37.561160 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.561605 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.561646 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.561784 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:37.561953 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.562111 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.562231 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:37.562386 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:37.562602 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:37.562615 1161065 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-301894 && echo "no-preload-301894" | sudo tee /etc/hostname
	I0908 14:47:37.701317 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-301894
	
	I0908 14:47:37.701351 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:37.705258 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.705910 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.705941 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.706206 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:37.706513 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.706732 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.706900 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:37.707110 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:37.707366 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:37.707387 1161065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-301894' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-301894/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-301894' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:47:37.843925 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:47:37.843967 1161065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21508-1116714/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-1116714/.minikube}
	I0908 14:47:37.844021 1161065 buildroot.go:174] setting up certificates
	I0908 14:47:37.844040 1161065 provision.go:84] configureAuth start
	I0908 14:47:37.844058 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetMachineName
	I0908 14:47:37.844432 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetIP
	I0908 14:47:37.847479 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.847900 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.847937 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.848127 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:37.850510 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.850891 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.850923 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.851078 1161065 provision.go:143] copyHostCerts
	I0908 14:47:37.851158 1161065 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem, removing ...
	I0908 14:47:37.851169 1161065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem
	I0908 14:47:37.851221 1161065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem (1082 bytes)
	I0908 14:47:37.851316 1161065 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem, removing ...
	I0908 14:47:37.851324 1161065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem
	I0908 14:47:37.851351 1161065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem (1123 bytes)
	I0908 14:47:37.851459 1161065 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem, removing ...
	I0908 14:47:37.851469 1161065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem
	I0908 14:47:37.851487 1161065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem (1675 bytes)
	I0908 14:47:37.851533 1161065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem org=jenkins.no-preload-301894 san=[127.0.0.1 192.168.39.135 localhost minikube no-preload-301894]
	I0908 14:47:38.160932 1161065 provision.go:177] copyRemoteCerts
	I0908 14:47:38.161016 1161065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:47:38.161048 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.164089 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.164517 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.164551 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.164706 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.164985 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.165168 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.165345 1161065 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa Username:docker}
	I0908 14:47:38.257981 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 14:47:38.295923 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 14:47:38.333158 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 14:47:38.368889 1161065 provision.go:87] duration metric: took 524.827415ms to configureAuth
	I0908 14:47:38.368930 1161065 buildroot.go:189] setting minikube options for container-runtime
	I0908 14:47:38.369177 1161065 config.go:182] Loaded profile config "no-preload-301894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:47:38.369321 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.372614 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.373020 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.373052 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.373273 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.373499 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.373686 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.374213 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.374555 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:38.374842 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:38.374868 1161065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 14:47:38.643745 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 14:47:38.643790 1161065 main.go:141] libmachine: Checking connection to Docker...
	I0908 14:47:38.643804 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetURL
	I0908 14:47:38.645360 1161065 main.go:141] libmachine: (no-preload-301894) DBG | using libvirt version 6000000
	I0908 14:47:38.648119 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.648477 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.648511 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.648702 1161065 main.go:141] libmachine: Docker is up and running!
	I0908 14:47:38.648720 1161065 main.go:141] libmachine: Reticulating splines...
	I0908 14:47:38.648728 1161065 client.go:171] duration metric: took 25.388584474s to LocalClient.Create
	I0908 14:47:38.648755 1161065 start.go:167] duration metric: took 25.388655219s to libmachine.API.Create "no-preload-301894"
	I0908 14:47:38.648769 1161065 start.go:293] postStartSetup for "no-preload-301894" (driver="kvm2")
	I0908 14:47:38.648783 1161065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:47:38.648812 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:38.649087 1161065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:47:38.649117 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.651965 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.652312 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.652336 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.652604 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.652899 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.653125 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.653274 1161065 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa Username:docker}
	I0908 14:47:38.745265 1161065 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:47:38.751059 1161065 info.go:137] Remote host: Buildroot 2025.02
	I0908 14:47:38.751101 1161065 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/addons for local assets ...
	I0908 14:47:38.751203 1161065 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/files for local assets ...
	I0908 14:47:38.751307 1161065 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem -> 11208752.pem in /etc/ssl/certs
	I0908 14:47:38.751435 1161065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:47:38.765567 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:38.800453 1161065 start.go:296] duration metric: took 151.664041ms for postStartSetup
	I0908 14:47:38.800524 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetConfigRaw
	I0908 14:47:38.801279 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetIP
	I0908 14:47:38.804637 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.804988 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.805020 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.805405 1161065 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/config.json ...
	I0908 14:47:38.805719 1161065 start.go:128] duration metric: took 25.571085913s to createHost
	I0908 14:47:38.805756 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.809193 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.809675 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.809706 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.809911 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.810166 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.810333 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.810546 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.810747 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:38.810988 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:38.811003 1161065 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 14:47:38.933919 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757342858.916620128
	
	I0908 14:47:38.933949 1161065 fix.go:216] guest clock: 1757342858.916620128
	I0908 14:47:38.933960 1161065 fix.go:229] Guest: 2025-09-08 14:47:38.916620128 +0000 UTC Remote: 2025-09-08 14:47:38.805737661 +0000 UTC m=+56.712336294 (delta=110.882467ms)
	I0908 14:47:38.934034 1161065 fix.go:200] guest clock delta is within tolerance: 110.882467ms
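
fix.go reads the guest's clock with `date +%s.%N` over SSH and compares it against the host's, accepting the ~110ms delta seen here. A minimal reproduction of the comparison; the 2-second tolerance below is an illustrative assumption, since the real threshold is not printed in this log:

    KEY=/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa
    host_now=$(date +%s.%N)
    guest_now=$(ssh -i "$KEY" docker@192.168.39.135 'date +%s.%N')
    awk -v g="$guest_now" -v h="$host_now" 'BEGIN {
      d = g - h; printf "guest clock delta: %+.6fs\n", d
      exit (d > 2 || d < -2)   # nonzero exit on drift past the assumed tolerance
    }'
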
	I0908 14:47:38.934047 1161065 start.go:83] releasing machines lock for "no-preload-301894", held for 25.699591066s
	I0908 14:47:38.934091 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:38.934420 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetIP
	I0908 14:47:38.937673 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.938123 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.938158 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.938357 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:38.938943 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:38.939160 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:38.939267 1161065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:47:38.939359 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.939400 1161065 ssh_runner.go:195] Run: cat /version.json
	I0908 14:47:38.939433 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.942714 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.942747 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.943190 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.943250 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.943274 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.943298 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.943680 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.943699 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.943921 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.943922 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.944143 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.944148 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.944354 1161065 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa Username:docker}
	I0908 14:47:38.944358 1161065 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa Username:docker}
	I0908 14:47:39.035120 1161065 ssh_runner.go:195] Run: systemctl --version
	I0908 14:47:39.062137 1161065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 14:47:39.236708 1161065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 14:47:39.246763 1161065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 14:47:39.246858 1161065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:47:39.270638 1161065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
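
Before settling on a runtime, minikube shelves any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix so its own CNI wins; that is the find invocation above (the log strips the shell quoting). A readable equivalent with quoting restored:

    # rename stock bridge/podman CNI configs so they stop loading
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
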
	I0908 14:47:39.270676 1161065 start.go:495] detecting cgroup driver to use...
	I0908 14:47:39.270761 1161065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 14:47:39.297655 1161065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 14:47:39.317784 1161065 docker.go:218] disabling cri-docker service (if available) ...
	I0908 14:47:39.317875 1161065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 14:47:39.335086 1161065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 14:47:39.354042 1161065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 14:47:39.548548 1161065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 14:47:39.712825 1161065 docker.go:234] disabling docker service ...
	I0908 14:47:39.712903 1161065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 14:47:39.734928 1161065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 14:47:39.755360 1161065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 14:47:39.989247 1161065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 14:47:40.143124 1161065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:47:40.161711 1161065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:47:40.188459 1161065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 14:47:40.188551 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.204138 1161065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 14:47:40.204229 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.219098 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.233463 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.248559 1161065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:47:40.264441 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.279123 1161065 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.305163 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
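
The sequence above writes /etc/crictl.yaml and patches CRI-O's drop-in config in place. Collected into one standalone script (the commands are the ones logged, with quoting restored and the config path factored into a variable):

    # point crictl at the CRI-O socket
    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # pin the pause image and the cgroup driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # make low ports bindable from unprivileged pods via default_sysctls
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
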
	I0908 14:47:40.319616 1161065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:47:40.332770 1161065 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 14:47:40.332859 1161065 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 14:47:40.355858 1161065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 14:47:40.369794 1161065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:40.520912 1161065 ssh_runner.go:195] Run: sudo systemctl restart crio
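
The br_netfilter module is not loaded by default on the Buildroot guest, which is why the sysctl probe above fails until the module is inserted; minikube then enables IPv4 forwarding and restarts CRI-O. The same steps, standalone:

    sudo modprobe br_netfilter                      # provides /proc/sys/net/bridge/*
    sudo sysctl net.bridge.bridge-nf-call-iptables  # should now succeed
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload
    sudo systemctl restart crio
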
	I0908 14:47:40.639497 1161065 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 14:47:40.639577 1161065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 14:47:40.645350 1161065 start.go:563] Will wait 60s for crictl version
	I0908 14:47:40.645420 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:40.650328 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:47:40.697177 1161065 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 14:47:40.697287 1161065 ssh_runner.go:195] Run: crio --version
	I0908 14:47:40.730232 1161065 ssh_runner.go:195] Run: crio --version
	I0908 14:47:40.764916 1161065 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 14:47:40.766192 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetIP
	I0908 14:47:40.769070 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:40.769584 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:40.769611 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:40.769912 1161065 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0908 14:47:40.777603 1161065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
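
That one-liner rewrites /etc/hosts so the guest resolves host.minikube.internal to the libvirt gateway. Expanded for readability (same logic as the logged command; the $'...' quoting needs bash, which the logged command runs under /bin/bash -c):

    # drop any stale host.minikube.internal entry, append the gateway, replace atomically
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.39.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
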
	I0908 14:47:40.798773 1161065 kubeadm.go:875] updating cluster {Name:no-preload-301894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-301894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 14:47:40.798946 1161065 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 14:47:40.798999 1161065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:40.842242 1161065 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0908 14:47:40.842279 1161065 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.0 registry.k8s.io/kube-controller-manager:v1.34.0 registry.k8s.io/kube-scheduler:v1.34.0 registry.k8s.io/kube-proxy:v1.34.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0908 14:47:40.842343 1161065 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:40.842368 1161065 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:40.842397 1161065 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I0908 14:47:40.842381 1161065 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:40.842427 1161065 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:40.842469 1161065 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:40.842477 1161065 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:40.842407 1161065 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:40.843986 1161065 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:40.843994 1161065 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:40.844049 1161065 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I0908 14:47:40.844112 1161065 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:40.843986 1161065 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:40.844144 1161065 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:40.844195 1161065 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:40.844204 1161065 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:40.976211 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:40.982227 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:40.988896 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:40.989992 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:40.993290 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:41.007920 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I0908 14:47:41.018316 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.0
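
cache_images decides per image whether a transfer is needed by asking podman for the stored image ID and comparing it to the pinned hash; a mismatch or absence produces the "needs transfer" lines below and a crictl rmi of the stale tag. A hand-run check in that spirit (the comparison logic is inferred from the log messages; the expected hash is the kube-apiserver one logged below):

    expected=90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
    actual=$(sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.34.0 2>/dev/null)
    if [ "$actual" != "$expected" ]; then
      echo "registry.k8s.io/kube-apiserver:v1.34.0 needs transfer"
      sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.0 || true
    fi
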
	I0908 14:47:41.097150 1161065 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.0" does not exist at hash "90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90" in container runtime
	I0908 14:47:41.097220 1161065 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:41.097294 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.153300 1161065 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.0" does not exist at hash "a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634" in container runtime
	I0908 14:47:41.153361 1161065 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:41.153423 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.200415 1161065 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I0908 14:47:41.200517 1161065 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:41.200547 1161065 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.0" does not exist at hash "46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc" in container runtime
	I0908 14:47:41.200586 1161065 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:41.200600 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.200640 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.200648 1161065 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I0908 14:47:41.200689 1161065 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:41.200735 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.210714 1161065 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I0908 14:47:41.210784 1161065 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I0908 14:47:41.210841 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.215898 1161065 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.0" needs transfer: "registry.k8s.io/kube-proxy:v1.34.0" does not exist at hash "df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce" in container runtime
	I0908 14:47:41.215928 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:41.215962 1161065 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:41.215976 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:41.216015 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:41.216035 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.297696 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:41.297695 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:41.297793 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0908 14:47:41.297921 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:41.297946 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:41.298011 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:41.298054 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:41.425096 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0908 14:47:41.460502 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:41.489178 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:41.489245 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:41.489303 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:41.509154 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:41.509183 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:41.557564 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0908 14:47:41.587872 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:41.703578 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0
	I0908 14:47:41.703721 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.0
	I0908 14:47:41.707362 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:41.707402 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:41.707450 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0
	I0908 14:47:41.707531 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.0
	I0908 14:47:41.718915 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I0908 14:47:41.719031 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I0908 14:47:41.759895 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I0908 14:47:41.759975 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I0908 14:47:41.760008 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.0': No such file or directory
	I0908 14:47:41.760034 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I0908 14:47:41.760044 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 --> /var/lib/minikube/images/kube-apiserver_v1.34.0 (27077120 bytes)
	I0908 14:47:41.760085 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I0908 14:47:41.843897 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:41.861712 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0
	I0908 14:47:41.861748 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I0908 14:47:41.861787 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I0908 14:47:41.861713 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0
	I0908 14:47:41.861848 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.0
	I0908 14:47:41.861856 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I0908 14:47:41.861719 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.0': No such file or directory
	I0908 14:47:41.861881 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 --> /var/lib/minikube/images/kube-controller-manager_v1.34.0 (22830592 bytes)
	I0908 14:47:41.861875 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I0908 14:47:41.861820 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I0908 14:47:41.861905 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I0908 14:47:41.861932 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.0
	I0908 14:47:41.965136 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.0': No such file or directory
	I0908 14:47:41.965163 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.0': No such file or directory
	I0908 14:47:41.965190 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 --> /var/lib/minikube/images/kube-proxy_v1.34.0 (25966080 bytes)
	I0908 14:47:41.965192 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 --> /var/lib/minikube/images/kube-scheduler_v1.34.0 (17396736 bytes)
	I0908 14:47:41.965722 1161065 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0908 14:47:41.965770 1161065 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:41.965831 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:42.020378 1161065 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I0908 14:47:42.020483 1161065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I0908 14:47:42.078919 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
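
Each missing image then follows the same transfer-and-load cycle visible above: stat the tarball on the guest, scp the host-side cache file across when the stat fails, and podman-load it into the runtime's store. One iteration sketched by hand (the plain scp standing in for minikube's internal sshutil is an illustrative assumption; the paths are the ones logged):

    KEY=/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa
    SRC=/home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0
    DST=/var/lib/minikube/images/kube-scheduler_v1.34.0
    # copy only when the guest does not already have the tarball
    ssh -i "$KEY" docker@192.168.39.135 "stat -c '%s %y' $DST" 2>/dev/null || \
      scp -i "$KEY" "$SRC" docker@192.168.39.135:"$DST"
    # load the archive into CRI-O's image store
    ssh -i "$KEY" docker@192.168.39.135 "sudo podman load -i $DST"
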
	I0908 14:47:38.720866 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:39.221095 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:39.720987 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:40.221657 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:40.721314 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:41.220766 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:41.721203 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:42.221617 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:42.720952 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:43.221404 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:43.721317 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:44.220963 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:44.720830 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:45.220623 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:45.347813 1160669 kubeadm.go:1105] duration metric: took 11.893117029s to wait for elevateKubeSystemPrivileges
	I0908 14:47:45.347887 1160669 kubeadm.go:394] duration metric: took 24.164696368s to StartCluster
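
The repeated `kubectl get sa default` runs above are elevateKubeSystemPrivileges polling (at roughly 500ms intervals, judging by the timestamps) until the default service account exists. An equivalent wait loop:

    # poll for the default service account, as in the runs above
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
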
	I0908 14:47:45.347916 1160669 settings.go:142] acquiring lock: {Name:mkc208e3a70732deaf67c191918f201f73e82457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:45.348058 1160669 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 14:47:45.349168 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/kubeconfig: {Name:mk93422b0007d912fa8f198f71d62d01a418d566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:45.349548 1160669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 14:47:45.349550 1160669 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.48 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 14:47:45.349640 1160669 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 14:47:45.349795 1160669 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-454279"
	I0908 14:47:45.349805 1160669 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-454279"
	I0908 14:47:45.349820 1160669 config.go:182] Loaded profile config "old-k8s-version-454279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0908 14:47:45.349826 1160669 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-454279"
	I0908 14:47:45.349836 1160669 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-454279"
	I0908 14:47:45.349870 1160669 host.go:66] Checking if "old-k8s-version-454279" exists ...
	I0908 14:47:45.350341 1160669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.350382 1160669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.350391 1160669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.350418 1160669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.351120 1160669 out.go:179] * Verifying Kubernetes components...
	I0908 14:47:45.352793 1160669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:45.374484 1160669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0908 14:47:45.374717 1160669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0908 14:47:45.375337 1160669 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.375461 1160669 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.375918 1160669 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.375942 1160669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.376026 1160669 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.376039 1160669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.376470 1160669 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.376518 1160669 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.376708 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetState
	I0908 14:47:45.377155 1160669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.377198 1160669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.380946 1160669 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-454279"
	I0908 14:47:45.381009 1160669 host.go:66] Checking if "old-k8s-version-454279" exists ...
	I0908 14:47:45.381428 1160669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.381483 1160669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.403210 1160669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37571
	I0908 14:47:45.403809 1160669 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.404531 1160669 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.404563 1160669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.404875 1160669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0908 14:47:45.405094 1160669 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.405298 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetState
	I0908 14:47:45.405577 1160669 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.406133 1160669 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.406151 1160669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.406508 1160669 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.406979 1160669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.407030 1160669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.407322 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:45.410187 1160669 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:45.846353 1161554 start.go:364] duration metric: took 36.603280003s to acquireMachinesLock for "embed-certs-372004"
	I0908 14:47:45.846462 1161554 start.go:93] Provisioning new machine with config: &{Name:embed-certs-372004 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-372004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 14:47:45.846562 1161554 start.go:125] createHost starting for "" (driver="kvm2")
	I0908 14:47:42.579469 1161065 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I0908 14:47:42.579551 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:42.692219 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:42.778256 1161065 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.0
	I0908 14:47:42.778368 1161065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.0
	I0908 14:47:42.891505 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0908 14:47:42.891685 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0908 14:47:45.512519 1161065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.0: (2.734115473s)
	I0908 14:47:45.512565 1161065 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 from cache
	I0908 14:47:45.512592 1161065 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.0
	I0908 14:47:45.512649 1161065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.0
	I0908 14:47:45.512649 1161065 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.620929371s)
	I0908 14:47:45.512697 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0908 14:47:45.512732 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0908 14:47:45.412567 1160669 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:47:45.412601 1160669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 14:47:45.412635 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:45.417707 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:45.417719 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:45.417757 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:45.417785 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:45.418320 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:45.418906 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:45.419189 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:45.428832 1160669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
	I0908 14:47:45.430114 1160669 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.431127 1160669 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.431156 1160669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.432509 1160669 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.432730 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetState
	I0908 14:47:45.435061 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:45.435429 1160669 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 14:47:45.435452 1160669 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 14:47:45.435479 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:45.440341 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:45.440853 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:45.440895 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:45.441132 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:45.441409 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:45.441584 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:45.441763 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:45.742923 1160669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
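
That pipeline rewrites the live coredns ConfigMap: it inserts a hosts{} stanza resolving host.minikube.internal ahead of the forward directive, inserts a log directive ahead of errors, and feeds the result back through kubectl replace. Judging from the sed expressions, the patched Corefile fragment comes out as (surrounding directives omitted):

        log
        errors
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
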
	I0908 14:47:45.789308 1160669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:47:46.056326 1160669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:47:46.132154 1160669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 14:47:48.417231 1160669 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.674257678s)
	I0908 14:47:48.417274 1160669 start.go:976] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0908 14:47:48.418751 1160669 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.629391053s)
	I0908 14:47:48.419470 1160669 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-454279" to be "Ready" ...
	I0908 14:47:48.441291 1160669 node_ready.go:49] node "old-k8s-version-454279" is "Ready"
	I0908 14:47:48.441355 1160669 node_ready.go:38] duration metric: took 21.855187ms for node "old-k8s-version-454279" to be "Ready" ...
	I0908 14:47:48.441379 1160669 api_server.go:52] waiting for apiserver process to appear ...
	I0908 14:47:48.441493 1160669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:47:48.609162 1160669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.552776998s)
	I0908 14:47:48.609230 1160669 main.go:141] libmachine: Making call to close driver server
	I0908 14:47:48.609244 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .Close
	I0908 14:47:48.609272 1160669 api_server.go:72] duration metric: took 3.25968722s to wait for apiserver process to appear ...
	I0908 14:47:48.609284 1160669 api_server.go:88] waiting for apiserver healthz status ...
	I0908 14:47:48.609321 1160669 api_server.go:253] Checking apiserver healthz at https://192.168.50.48:8443/healthz ...
	I0908 14:47:48.609632 1160669 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:47:48.609659 1160669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:47:48.609672 1160669 main.go:141] libmachine: Making call to close driver server
	I0908 14:47:48.609699 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .Close
	I0908 14:47:48.609795 1160669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.477033697s)
	I0908 14:47:48.610022 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Closing plugin on server side
	I0908 14:47:48.610109 1160669 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:47:48.610141 1160669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:47:48.610338 1160669 main.go:141] libmachine: Making call to close driver server
	I0908 14:47:48.610402 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .Close
	I0908 14:47:48.610689 1160669 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:47:48.610709 1160669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:47:48.610718 1160669 main.go:141] libmachine: Making call to close driver server
	I0908 14:47:48.610725 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .Close
	I0908 14:47:48.611820 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Closing plugin on server side
	I0908 14:47:48.611828 1160669 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:47:48.611841 1160669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:47:48.623333 1160669 api_server.go:279] https://192.168.50.48:8443/healthz returned 200:
	ok
	I0908 14:47:48.625613 1160669 api_server.go:141] control plane version: v1.28.0
	I0908 14:47:48.625675 1160669 api_server.go:131] duration metric: took 16.381913ms to wait for apiserver health ...
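
The healthz wait above amounts to polling the endpoint until it answers 200 with body "ok". A hand-run probe against the same endpoint (-k skips CA verification for brevity; minikube's own check authenticates with the cluster credentials from its kubeconfig, and the apiserver may reject anonymous probes depending on its anonymous-auth setting):

    until curl -fsk https://192.168.50.48:8443/healthz | grep -qx ok; do
      sleep 1
    done
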
	I0908 14:47:48.625689 1160669 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 14:47:48.651627 1160669 system_pods.go:59] 8 kube-system pods found
	I0908 14:47:48.651722 1160669 system_pods.go:61] "coredns-5dd5756b68-bzzvj" [690695ec-8039-4269-894c-bb8ef49aef3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.651748 1160669 system_pods.go:61] "coredns-5dd5756b68-wnv5p" [d97c50cc-9633-4230-b501-5cb90fc1fed6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.651756 1160669 system_pods.go:61] "etcd-old-k8s-version-454279" [ea25c27d-e993-4978-89bf-8699bd763b8e] Running
	I0908 14:47:48.651763 1160669 system_pods.go:61] "kube-apiserver-old-k8s-version-454279" [93e45f85-1ddb-4873-893b-a0008c4e9e47] Running
	I0908 14:47:48.651771 1160669 system_pods.go:61] "kube-controller-manager-old-k8s-version-454279" [795f0269-31ee-492d-93d4-d58e6378b2a0] Running
	I0908 14:47:48.651779 1160669 system_pods.go:61] "kube-proxy-rjdpq" [4aa93314-791f-4a28-8457-c8c7348a2167] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 14:47:48.651785 1160669 system_pods.go:61] "kube-scheduler-old-k8s-version-454279" [451a54a6-51f0-42c8-bde1-99e63b386b9e] Running
	I0908 14:47:48.651790 1160669 system_pods.go:61] "storage-provisioner" [1d11738d-c363-45ab-b2fb-7973140a1b2e] Pending
	I0908 14:47:48.651800 1160669 system_pods.go:74] duration metric: took 26.101765ms to wait for pod list to return data ...
	I0908 14:47:48.651813 1160669 default_sa.go:34] waiting for default service account to be created ...
	I0908 14:47:48.655569 1160669 main.go:141] libmachine: Making call to close driver server
	I0908 14:47:48.655601 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .Close
	I0908 14:47:48.656109 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Closing plugin on server side
	I0908 14:47:48.656177 1160669 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:47:48.656189 1160669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:47:48.657580 1160669 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 14:47:45.848020 1161554 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 14:47:45.848269 1161554 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.848341 1161554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.871830 1161554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33489
	I0908 14:47:45.872436 1161554 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.873082 1161554 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.873108 1161554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.873586 1161554 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.873785 1161554 main.go:141] libmachine: (embed-certs-372004) Calling .GetMachineName
	I0908 14:47:45.873955 1161554 main.go:141] libmachine: (embed-certs-372004) Calling .DriverName
	I0908 14:47:45.874172 1161554 start.go:159] libmachine.API.Create for "embed-certs-372004" (driver="kvm2")
	I0908 14:47:45.874207 1161554 client.go:168] LocalClient.Create starting
	I0908 14:47:45.874250 1161554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem
	I0908 14:47:45.874290 1161554 main.go:141] libmachine: Decoding PEM data...
	I0908 14:47:45.874318 1161554 main.go:141] libmachine: Parsing certificate...
	I0908 14:47:45.874393 1161554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem
	I0908 14:47:45.874431 1161554 main.go:141] libmachine: Decoding PEM data...
	I0908 14:47:45.874447 1161554 main.go:141] libmachine: Parsing certificate...
	I0908 14:47:45.874477 1161554 main.go:141] libmachine: Running pre-create checks...
	I0908 14:47:45.874487 1161554 main.go:141] libmachine: (embed-certs-372004) Calling .PreCreateCheck
	I0908 14:47:45.874937 1161554 main.go:141] libmachine: (embed-certs-372004) Calling .GetConfigRaw
	I0908 14:47:45.875461 1161554 main.go:141] libmachine: Creating machine...
	I0908 14:47:45.875478 1161554 main.go:141] libmachine: (embed-certs-372004) Calling .Create
	I0908 14:47:45.875635 1161554 main.go:141] libmachine: (embed-certs-372004) creating KVM machine...
	I0908 14:47:45.875682 1161554 main.go:141] libmachine: (embed-certs-372004) creating network...
	I0908 14:47:45.877282 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | found existing default KVM network
	I0908 14:47:45.878669 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:45.878495 1161911 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c1:a4:09} reservation:<nil>}
	I0908 14:47:45.881284 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:45.879355 1161911 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:88:97:37} reservation:<nil>}
	I0908 14:47:45.881324 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:45.880084 1161911 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:8f:0a:78} reservation:<nil>}
	I0908 14:47:45.881348 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:45.881085 1161911 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ceac0}
	I0908 14:47:45.881368 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | created network xml: 
	I0908 14:47:45.881375 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | <network>
	I0908 14:47:45.881380 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   <name>mk-embed-certs-372004</name>
	I0908 14:47:45.881385 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   <dns enable='no'/>
	I0908 14:47:45.881389 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   
	I0908 14:47:45.881396 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0908 14:47:45.881400 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |     <dhcp>
	I0908 14:47:45.881406 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0908 14:47:45.881410 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |     </dhcp>
	I0908 14:47:45.881414 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   </ip>
	I0908 14:47:45.881418 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   
	I0908 14:47:45.881422 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | </network>
	I0908 14:47:45.881426 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | 
	I0908 14:47:45.890786 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | trying to create private KVM network mk-embed-certs-372004 192.168.72.0/24...
	I0908 14:47:46.003232 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | private KVM network mk-embed-certs-372004 192.168.72.0/24 created
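The network.go lines above show the subnet picker walking 192.168.x.0/24 candidates in steps of 11 (39, 50, 61, then 72) and taking the first block that no existing libvirt bridge claims. A minimal Go sketch of that scan, assuming that stepping; the candidate progression and the taken-set are illustrative stand-ins for the driver's real interface/lease checks:

```go
// Sketch of the free-subnet scan the network.go lines above show: walk
// candidate 192.168.x.0/24 blocks and return the first one not already
// claimed by an existing libvirt bridge. The stepping and isTaken check
// are illustrative, not minikube's actual implementation.
package main

import (
	"fmt"
	"net"
)

func freeSubnet(taken map[string]bool) (*net.IPNet, error) {
	// Same third-octet progression the log shows: 39, 50, 61, 72, ...
	for octet := 39; octet <= 254; octet += 11 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		if !taken[subnet.String()] {
			return subnet, nil // first free candidate wins
		}
	}
	return nil, fmt.Errorf("no free /24 found")
}

func main() {
	// Subnets the log reports as taken by virbr2/virbr3/virbr1.
	taken := map[string]bool{
		"192.168.39.0/24": true,
		"192.168.50.0/24": true,
		"192.168.61.0/24": true,
	}
	subnet, err := freeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // 192.168.72.0/24
}
```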
	I0908 14:47:46.003502 1161554 main.go:141] libmachine: (embed-certs-372004) setting up store path in /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004 ...
	I0908 14:47:46.003538 1161554 main.go:141] libmachine: (embed-certs-372004) building disk image from file:///home/jenkins/minikube-integration/21508-1116714/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 14:47:46.003561 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:46.003482 1161911 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:47:46.003723 1161554 main.go:141] libmachine: (embed-certs-372004) Downloading /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21508-1116714/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 14:47:46.335755 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:46.335566 1161911 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004/id_rsa...
	I0908 14:47:46.601582 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:46.601395 1161911 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004/embed-certs-372004.rawdisk...
	I0908 14:47:46.601613 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | Writing magic tar header
	I0908 14:47:46.601631 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | Writing SSH key tar header
	I0908 14:47:46.601654 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:46.601587 1161911 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004 ...
	I0908 14:47:46.601773 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004
	I0908 14:47:46.601935 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004 (perms=drwx------)
	I0908 14:47:46.602028 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines
	I0908 14:47:46.602055 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:47:46.602069 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714
	I0908 14:47:46.602079 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0908 14:47:46.602093 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins
	I0908 14:47:46.602101 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home
	I0908 14:47:46.602113 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | skipping /home - not owner
	I0908 14:47:46.602130 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines (perms=drwxr-xr-x)
	I0908 14:47:46.602140 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube (perms=drwxr-xr-x)
	I0908 14:47:46.602152 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714 (perms=drwxrwxr-x)
	I0908 14:47:46.602161 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0908 14:47:46.602172 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0908 14:47:46.602180 1161554 main.go:141] libmachine: (embed-certs-372004) creating domain...
	I0908 14:47:46.603813 1161554 main.go:141] libmachine: (embed-certs-372004) define libvirt domain using xml: 
	I0908 14:47:46.603835 1161554 main.go:141] libmachine: (embed-certs-372004) <domain type='kvm'>
	I0908 14:47:46.603843 1161554 main.go:141] libmachine: (embed-certs-372004)   <name>embed-certs-372004</name>
	I0908 14:47:46.603849 1161554 main.go:141] libmachine: (embed-certs-372004)   <memory unit='MiB'>3072</memory>
	I0908 14:47:46.603868 1161554 main.go:141] libmachine: (embed-certs-372004)   <vcpu>2</vcpu>
	I0908 14:47:46.603878 1161554 main.go:141] libmachine: (embed-certs-372004)   <features>
	I0908 14:47:46.603887 1161554 main.go:141] libmachine: (embed-certs-372004)     <acpi/>
	I0908 14:47:46.603893 1161554 main.go:141] libmachine: (embed-certs-372004)     <apic/>
	I0908 14:47:46.603900 1161554 main.go:141] libmachine: (embed-certs-372004)     <pae/>
	I0908 14:47:46.603906 1161554 main.go:141] libmachine: (embed-certs-372004)     
	I0908 14:47:46.603912 1161554 main.go:141] libmachine: (embed-certs-372004)   </features>
	I0908 14:47:46.603919 1161554 main.go:141] libmachine: (embed-certs-372004)   <cpu mode='host-passthrough'>
	I0908 14:47:46.603926 1161554 main.go:141] libmachine: (embed-certs-372004)   
	I0908 14:47:46.603932 1161554 main.go:141] libmachine: (embed-certs-372004)   </cpu>
	I0908 14:47:46.603941 1161554 main.go:141] libmachine: (embed-certs-372004)   <os>
	I0908 14:47:46.603947 1161554 main.go:141] libmachine: (embed-certs-372004)     <type>hvm</type>
	I0908 14:47:46.603955 1161554 main.go:141] libmachine: (embed-certs-372004)     <boot dev='cdrom'/>
	I0908 14:47:46.603963 1161554 main.go:141] libmachine: (embed-certs-372004)     <boot dev='hd'/>
	I0908 14:47:46.603972 1161554 main.go:141] libmachine: (embed-certs-372004)     <bootmenu enable='no'/>
	I0908 14:47:46.603978 1161554 main.go:141] libmachine: (embed-certs-372004)   </os>
	I0908 14:47:46.603987 1161554 main.go:141] libmachine: (embed-certs-372004)   <devices>
	I0908 14:47:46.603995 1161554 main.go:141] libmachine: (embed-certs-372004)     <disk type='file' device='cdrom'>
	I0908 14:47:46.604013 1161554 main.go:141] libmachine: (embed-certs-372004)       <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004/boot2docker.iso'/>
	I0908 14:47:46.604022 1161554 main.go:141] libmachine: (embed-certs-372004)       <target dev='hdc' bus='scsi'/>
	I0908 14:47:46.604029 1161554 main.go:141] libmachine: (embed-certs-372004)       <readonly/>
	I0908 14:47:46.604034 1161554 main.go:141] libmachine: (embed-certs-372004)     </disk>
	I0908 14:47:46.604042 1161554 main.go:141] libmachine: (embed-certs-372004)     <disk type='file' device='disk'>
	I0908 14:47:46.604050 1161554 main.go:141] libmachine: (embed-certs-372004)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0908 14:47:46.604065 1161554 main.go:141] libmachine: (embed-certs-372004)       <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004/embed-certs-372004.rawdisk'/>
	I0908 14:47:46.604073 1161554 main.go:141] libmachine: (embed-certs-372004)       <target dev='hda' bus='virtio'/>
	I0908 14:47:46.604082 1161554 main.go:141] libmachine: (embed-certs-372004)     </disk>
	I0908 14:47:46.604116 1161554 main.go:141] libmachine: (embed-certs-372004)     <interface type='network'>
	I0908 14:47:46.604143 1161554 main.go:141] libmachine: (embed-certs-372004)       <source network='mk-embed-certs-372004'/>
	I0908 14:47:46.604151 1161554 main.go:141] libmachine: (embed-certs-372004)       <model type='virtio'/>
	I0908 14:47:46.604159 1161554 main.go:141] libmachine: (embed-certs-372004)     </interface>
	I0908 14:47:46.604166 1161554 main.go:141] libmachine: (embed-certs-372004)     <interface type='network'>
	I0908 14:47:46.604176 1161554 main.go:141] libmachine: (embed-certs-372004)       <source network='default'/>
	I0908 14:47:46.604183 1161554 main.go:141] libmachine: (embed-certs-372004)       <model type='virtio'/>
	I0908 14:47:46.604191 1161554 main.go:141] libmachine: (embed-certs-372004)     </interface>
	I0908 14:47:46.604202 1161554 main.go:141] libmachine: (embed-certs-372004)     <serial type='pty'>
	I0908 14:47:46.604211 1161554 main.go:141] libmachine: (embed-certs-372004)       <target port='0'/>
	I0908 14:47:46.604218 1161554 main.go:141] libmachine: (embed-certs-372004)     </serial>
	I0908 14:47:46.604227 1161554 main.go:141] libmachine: (embed-certs-372004)     <console type='pty'>
	I0908 14:47:46.604234 1161554 main.go:141] libmachine: (embed-certs-372004)       <target type='serial' port='0'/>
	I0908 14:47:46.604243 1161554 main.go:141] libmachine: (embed-certs-372004)     </console>
	I0908 14:47:46.604251 1161554 main.go:141] libmachine: (embed-certs-372004)     <rng model='virtio'>
	I0908 14:47:46.604260 1161554 main.go:141] libmachine: (embed-certs-372004)       <backend model='random'>/dev/random</backend>
	I0908 14:47:46.604266 1161554 main.go:141] libmachine: (embed-certs-372004)     </rng>
	I0908 14:47:46.604273 1161554 main.go:141] libmachine: (embed-certs-372004)     
	I0908 14:47:46.604279 1161554 main.go:141] libmachine: (embed-certs-372004)     
	I0908 14:47:46.604286 1161554 main.go:141] libmachine: (embed-certs-372004)   </devices>
	I0908 14:47:46.604293 1161554 main.go:141] libmachine: (embed-certs-372004) </domain>
	I0908 14:47:46.604305 1161554 main.go:141] libmachine: (embed-certs-372004) 
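The domain definition dumped above is assembled from the machine's parameters (name, 3072 MiB, 2 vCPUs, rawdisk path, private network). A simplified sketch of rendering such XML with text/template; the struct and the trimmed template are illustrative stand-ins, not the kvm2 driver's actual types:

```go
// Sketch: rendering a libvirt domain definition like the one above from
// machine parameters with text/template. Simplified for illustration.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type machine struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values taken from the embed-certs-372004 log lines above.
	m := machine{
		Name:      "embed-certs-372004",
		MemoryMiB: 3072,
		CPUs:      2,
		DiskPath:  "/path/to/embed-certs-372004.rawdisk", // illustrative path
		Network:   "mk-embed-certs-372004",
	}
	if err := t.Execute(os.Stdout, m); err != nil {
		panic(err)
	}
}
```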
	I0908 14:47:46.614959 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:01:62:d7 in network default
	I0908 14:47:46.615798 1161554 main.go:141] libmachine: (embed-certs-372004) starting domain...
	I0908 14:47:46.615819 1161554 main.go:141] libmachine: (embed-certs-372004) ensuring networks are active...
	I0908 14:47:46.615839 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:46.616924 1161554 main.go:141] libmachine: (embed-certs-372004) Ensuring network default is active
	I0908 14:47:46.617295 1161554 main.go:141] libmachine: (embed-certs-372004) Ensuring network mk-embed-certs-372004 is active
	I0908 14:47:46.618335 1161554 main.go:141] libmachine: (embed-certs-372004) getting domain XML...
	I0908 14:47:46.619436 1161554 main.go:141] libmachine: (embed-certs-372004) creating domain...
	I0908 14:47:47.157066 1161554 main.go:141] libmachine: (embed-certs-372004) waiting for IP...
	I0908 14:47:47.157977 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:47.158511 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:47.158639 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:47.158597 1161911 retry.go:31] will retry after 258.261603ms: waiting for domain to come up
	I0908 14:47:47.418495 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:47.419294 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:47.419330 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:47.419241 1161911 retry.go:31] will retry after 241.609497ms: waiting for domain to come up
	I0908 14:47:47.662948 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:47.663597 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:47.663634 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:47.663559 1161911 retry.go:31] will retry after 304.667685ms: waiting for domain to come up
	I0908 14:47:47.970449 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:47.971048 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:47.971108 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:47.971031 1161911 retry.go:31] will retry after 480.152266ms: waiting for domain to come up
	I0908 14:47:48.453029 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:48.453819 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:48.454035 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:48.453910 1161911 retry.go:31] will retry after 680.820573ms: waiting for domain to come up
	I0908 14:47:49.137093 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:49.137654 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:49.137684 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:49.137630 1161911 retry.go:31] will retry after 741.962797ms: waiting for domain to come up
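The retry.go lines poll for the new domain's DHCP lease with waits that grow roughly geometrically with jitter (258ms, 241ms, 304ms, 480ms, 680ms, 741ms, then 1s+). A sketch of such a loop under that assumption; the jitter formula is an assumed approximation, and lookupIP stands in for the driver's real lease query:

```go
// Sketch of the retry loop behind the "will retry after ..." lines:
// poll for the domain's DHCP lease with a jittered, growing backoff
// until an IP appears or the deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
	backoff := 250 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jitter around the current backoff, then grow it, mirroring the
		// roughly increasing waits in the log (258ms, 304ms, 480ms, ...).
		wait := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.72.23", nil // illustrative address in the new subnet
	}, 30*time.Second)
	fmt.Println(ip, err)
}
```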
	I0908 14:47:45.543761 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 14:47:45.543805 1161261 machine.go:96] duration metric: took 6.583470839s to provisionDockerMachine
	I0908 14:47:45.543824 1161261 start.go:293] postStartSetup for "pause-120061" (driver="kvm2")
	I0908 14:47:45.543839 1161261 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:47:45.543865 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.544268 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:47:45.544299 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.548239 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.548620 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.548665 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.548918 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.549128 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.549315 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.549481 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
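The sshutil.go line builds an SSH client from the lease's IP, port 22, the machine's generated id_rsa, and the docker user. A sketch of the same connection with golang.org/x/crypto/ssh; the key path is shortened and host-key verification is skipped purely for illustration:

```go
// Sketch of establishing the SSH session the sshutil.go line describes,
// using key-based auth as minikube does here.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/.../machines/pause-120061/id_rsa") // path shortened
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have ephemeral host keys
	}
	client, err := ssh.Dial("tcp", "192.168.61.147:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// Mirrors the first command the runner issues after connecting.
	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```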
	I0908 14:47:45.651211 1161261 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:47:45.658742 1161261 info.go:137] Remote host: Buildroot 2025.02
	I0908 14:47:45.658788 1161261 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/addons for local assets ...
	I0908 14:47:45.658868 1161261 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/files for local assets ...
	I0908 14:47:45.658969 1161261 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem -> 11208752.pem in /etc/ssl/certs
	I0908 14:47:45.659097 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:47:45.676039 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:45.724138 1161261 start.go:296] duration metric: took 180.282144ms for postStartSetup
	I0908 14:47:45.724193 1161261 fix.go:56] duration metric: took 6.789899375s for fixHost
	I0908 14:47:45.724223 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.727807 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.728227 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.728256 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.728609 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.728821 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.728957 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.729071 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.729234 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:45.729638 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:45.729654 1161261 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 14:47:45.846172 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757342865.843199249
	
	I0908 14:47:45.846208 1161261 fix.go:216] guest clock: 1757342865.843199249
	I0908 14:47:45.846220 1161261 fix.go:229] Guest: 2025-09-08 14:47:45.843199249 +0000 UTC Remote: 2025-09-08 14:47:45.724198252 +0000 UTC m=+50.631490013 (delta=119.000997ms)
	I0908 14:47:45.846246 1161261 fix.go:200] guest clock delta is within tolerance: 119.000997ms
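The fix.go lines read the guest clock over SSH (`date +%s.%N`), diff it against the host clock, and accept the drift when it falls inside a tolerance. A sketch of that delta computation using the exact timestamps from the log; the 2s threshold is an assumed value, not necessarily the one minikube uses:

```go
// Sketch of the guest-clock check in the fix.go lines above: parse the
// guest's `date +%s.%N` output and diff it against the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

const clockTolerance = 2 * time.Second // illustrative threshold

func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := hostNow.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	// Guest and host timestamps taken from the log; the delta comes out
	// to roughly the 119ms the fix.go line reports.
	delta, err := clockDelta("1757342865.843199249", time.Unix(1757342865, 724198252))
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= clockTolerance)
}
```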
	I0908 14:47:45.846254 1161261 start.go:83] releasing machines lock for "pause-120061", held for 6.912017635s
	I0908 14:47:45.846294 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.846620 1161261 main.go:141] libmachine: (pause-120061) Calling .GetIP
	I0908 14:47:45.849936 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.850359 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.850429 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.850680 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.851390 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.851623 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.851760 1161261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:47:45.851826 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.851903 1161261 ssh_runner.go:195] Run: cat /version.json
	I0908 14:47:45.851933 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.855883 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.856051 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.856613 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.856683 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.856713 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.856755 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.857042 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.857146 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.857256 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.857456 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.857469 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.857681 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
	I0908 14:47:45.858044 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.858209 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
	I0908 14:47:45.984024 1161261 ssh_runner.go:195] Run: systemctl --version
	I0908 14:47:45.994417 1161261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 14:47:46.189541 1161261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 14:47:46.205243 1161261 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 14:47:46.205348 1161261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:47:46.225389 1161261 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 14:47:46.225428 1161261 start.go:495] detecting cgroup driver to use...
	I0908 14:47:46.225519 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 14:47:46.259747 1161261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 14:47:46.288963 1161261 docker.go:218] disabling cri-docker service (if available) ...
	I0908 14:47:46.289158 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 14:47:46.320181 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 14:47:46.347824 1161261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 14:47:46.556387 1161261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 14:47:46.797576 1161261 docker.go:234] disabling docker service ...
	I0908 14:47:46.797675 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 14:47:46.847535 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 14:47:46.878193 1161261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 14:47:47.161555 1161261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 14:47:47.442372 1161261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:47:47.462302 1161261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:47:47.492084 1161261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 14:47:47.492176 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.508165 1161261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 14:47:47.508295 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.528597 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.546925 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.563039 1161261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:47:47.583391 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.598701 1161261 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.619434 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.641052 1161261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:47:47.654092 1161261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 14:47:47.668357 1161261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:47.985180 1161261 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 14:47:51.484903 1161261 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.499673595s)
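The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image, switching cgroup_manager to cgroupfs, and re-adding conmon_cgroup as "pod", before restarting crio. A sketch of the same substitutions done in-process with regexp over a trimmed example config; the input file content is illustrative:

```go
// Sketch of the config rewrites the sed commands above perform, done
// with regexp instead of shelling out.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror the first two substitutions from the log: pin the pause
	// image and switch the cgroup driver to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// The log then deletes conmon_cgroup and re-adds it as "pod" right
	// after the cgroup_manager line.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
```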
	I0908 14:47:51.484943 1161261 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 14:47:51.485020 1161261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 14:47:51.491847 1161261 start.go:563] Will wait 60s for crictl version
	I0908 14:47:51.491926 1161261 ssh_runner.go:195] Run: which crictl
	I0908 14:47:51.497807 1161261 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:47:51.555525 1161261 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 14:47:51.555677 1161261 ssh_runner.go:195] Run: crio --version
	I0908 14:47:51.590312 1161261 ssh_runner.go:195] Run: crio --version
	I0908 14:47:51.637110 1161261 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 14:47:48.523994 1161065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.0: (3.01130862s)
	I0908 14:47:48.524041 1161065 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 from cache
	I0908 14:47:48.524073 1161065 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I0908 14:47:48.524132 1161065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I0908 14:47:50.824020 1161065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.299841923s)
	I0908 14:47:50.824066 1161065 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I0908 14:47:50.824102 1161065 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.0
	I0908 14:47:50.824159 1161065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.0
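The cache_images lines load each cached image tarball into the runtime one at a time with `sudo podman load -i`, timing every transfer. A sketch of that loop with os/exec, run locally rather than through the SSH runner; the image paths are the ones the log names:

```go
// Sketch of the sequential image-loading loop behind the cache_images
// lines: each cached tarball is loaded with `podman load -i`.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	images := []string{
		"/var/lib/minikube/images/kube-controller-manager_v1.34.0",
		"/var/lib/minikube/images/coredns_v1.12.1",
		"/var/lib/minikube/images/kube-apiserver_v1.34.0",
	}
	for _, img := range images {
		start := time.Now()
		out, err := exec.Command("sudo", "podman", "load", "-i", img).CombinedOutput()
		if err != nil {
			fmt.Printf("loading %s failed: %v\n%s", img, err, out)
			continue
		}
		fmt.Printf("loaded %s in %s\n", img, time.Since(start))
	}
}
```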
	I0908 14:47:48.658564 1160669 addons.go:514] duration metric: took 3.308950977s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0908 14:47:48.660760 1160669 default_sa.go:45] found service account: "default"
	I0908 14:47:48.660792 1160669 default_sa.go:55] duration metric: took 8.963262ms for default service account to be created ...
	I0908 14:47:48.660806 1160669 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 14:47:48.670524 1160669 system_pods.go:86] 8 kube-system pods found
	I0908 14:47:48.670572 1160669 system_pods.go:89] "coredns-5dd5756b68-bzzvj" [690695ec-8039-4269-894c-bb8ef49aef3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.670582 1160669 system_pods.go:89] "coredns-5dd5756b68-wnv5p" [d97c50cc-9633-4230-b501-5cb90fc1fed6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.670590 1160669 system_pods.go:89] "etcd-old-k8s-version-454279" [ea25c27d-e993-4978-89bf-8699bd763b8e] Running
	I0908 14:47:48.670599 1160669 system_pods.go:89] "kube-apiserver-old-k8s-version-454279" [93e45f85-1ddb-4873-893b-a0008c4e9e47] Running
	I0908 14:47:48.670606 1160669 system_pods.go:89] "kube-controller-manager-old-k8s-version-454279" [795f0269-31ee-492d-93d4-d58e6378b2a0] Running
	I0908 14:47:48.670614 1160669 system_pods.go:89] "kube-proxy-rjdpq" [4aa93314-791f-4a28-8457-c8c7348a2167] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 14:47:48.670624 1160669 system_pods.go:89] "kube-scheduler-old-k8s-version-454279" [451a54a6-51f0-42c8-bde1-99e63b386b9e] Running
	I0908 14:47:48.670632 1160669 system_pods.go:89] "storage-provisioner" [1d11738d-c363-45ab-b2fb-7973140a1b2e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:47:48.670671 1160669 retry.go:31] will retry after 205.617344ms: missing components: kube-dns, kube-proxy
	I0908 14:47:48.881680 1160669 system_pods.go:86] 8 kube-system pods found
	I0908 14:47:48.881720 1160669 system_pods.go:89] "coredns-5dd5756b68-bzzvj" [690695ec-8039-4269-894c-bb8ef49aef3f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.881733 1160669 system_pods.go:89] "coredns-5dd5756b68-wnv5p" [d97c50cc-9633-4230-b501-5cb90fc1fed6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.881739 1160669 system_pods.go:89] "etcd-old-k8s-version-454279" [ea25c27d-e993-4978-89bf-8699bd763b8e] Running
	I0908 14:47:48.881744 1160669 system_pods.go:89] "kube-apiserver-old-k8s-version-454279" [93e45f85-1ddb-4873-893b-a0008c4e9e47] Running
	I0908 14:47:48.881750 1160669 system_pods.go:89] "kube-controller-manager-old-k8s-version-454279" [795f0269-31ee-492d-93d4-d58e6378b2a0] Running
	I0908 14:47:48.881755 1160669 system_pods.go:89] "kube-proxy-rjdpq" [4aa93314-791f-4a28-8457-c8c7348a2167] Running
	I0908 14:47:48.881760 1160669 system_pods.go:89] "kube-scheduler-old-k8s-version-454279" [451a54a6-51f0-42c8-bde1-99e63b386b9e] Running
	I0908 14:47:48.881767 1160669 system_pods.go:89] "storage-provisioner" [1d11738d-c363-45ab-b2fb-7973140a1b2e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:47:48.881779 1160669 system_pods.go:126] duration metric: took 220.96307ms to wait for k8s-apps to be running ...
	I0908 14:47:48.881795 1160669 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 14:47:48.881855 1160669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:47:48.901704 1160669 system_svc.go:56] duration metric: took 19.896589ms WaitForService to wait for kubelet
	I0908 14:47:48.901746 1160669 kubeadm.go:578] duration metric: took 3.552161714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 14:47:48.901771 1160669 node_conditions.go:102] verifying NodePressure condition ...
	I0908 14:47:48.907134 1160669 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 14:47:48.907167 1160669 node_conditions.go:123] node cpu capacity is 2
	I0908 14:47:48.907182 1160669 node_conditions.go:105] duration metric: took 5.402366ms to run NodePressure ...
	I0908 14:47:48.907199 1160669 start.go:241] waiting for startup goroutines ...
	I0908 14:47:48.925974 1160669 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-454279" context rescaled to 1 replicas
	I0908 14:47:48.926019 1160669 start.go:246] waiting for cluster config update ...
	I0908 14:47:48.926056 1160669 start.go:255] writing updated cluster config ...
	I0908 14:47:48.926406 1160669 ssh_runner.go:195] Run: rm -f paused
	I0908 14:47:48.935151 1160669 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 14:47:48.946541 1160669 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bzzvj" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 14:47:50.955115 1160669 pod_ready.go:104] pod "coredns-5dd5756b68-bzzvj" is not "Ready", error: <nil>
	W0908 14:47:52.955892 1160669 pod_ready.go:104] pod "coredns-5dd5756b68-bzzvj" is not "Ready", error: <nil>
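The pod_ready lines poll the coredns pod until its Ready condition flips, giving up after the 4m0s extra wait. A sketch of an equivalent poll that shells out to kubectl and reads the Ready condition from the pod's JSON status; the ~2s cadence is inferred from the log timestamps:

```go
// Sketch of the readiness poll behind the pod_ready lines: fetch the
// pod as JSON, check its Ready condition, retry until the deadline.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

type podStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func podReady(ctx, ns, name string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", ctx,
		"-n", ns, "get", "pod", name, "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var p podStatus
	if err := json.Unmarshal(out, &p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s extra wait
	for time.Now().Before(deadline) {
		ok, err := podReady("old-k8s-version-454279", "kube-system", "coredns-5dd5756b68-bzzvj")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // cadence inferred from the log
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```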
	I0908 14:47:49.881971 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:49.882496 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:49.882571 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:49.882485 1161911 retry.go:31] will retry after 1.068110411s: waiting for domain to come up
	I0908 14:47:50.952070 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:50.952673 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:50.952699 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:50.952645 1161911 retry.go:31] will retry after 975.337887ms: waiting for domain to come up
	I0908 14:47:51.931801 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:51.932502 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:51.932557 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:51.932480 1161911 retry.go:31] will retry after 1.756101885s: waiting for domain to come up
	I0908 14:47:53.691128 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:53.691920 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:53.692141 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:53.692087 1161911 retry.go:31] will retry after 1.815249423s: waiting for domain to come up
	I0908 14:47:51.638446 1161261 main.go:141] libmachine: (pause-120061) Calling .GetIP
	I0908 14:47:51.642263 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:51.642744 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:51.642776 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:51.643169 1161261 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0908 14:47:51.649711 1161261 kubeadm.go:875] updating cluster {Name:pause-120061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-120061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 14:47:51.649917 1161261 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 14:47:51.649988 1161261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:51.704103 1161261 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 14:47:51.704142 1161261 crio.go:433] Images already preloaded, skipping extraction
	I0908 14:47:51.704223 1161261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:51.748253 1161261 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 14:47:51.748292 1161261 cache_images.go:85] Images are preloaded, skipping loading
	I0908 14:47:51.748303 1161261 kubeadm.go:926] updating node { 192.168.61.147 8443 v1.34.0 crio true true} ...
	I0908 14:47:51.748454 1161261 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-120061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-120061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 14:47:51.748544 1161261 ssh_runner.go:195] Run: crio config
	I0908 14:47:51.824864 1161261 cni.go:84] Creating CNI manager for ""
	I0908 14:47:51.824905 1161261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:47:51.824923 1161261 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 14:47:51.824965 1161261 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.147 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-120061 NodeName:pause-120061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 14:47:51.825192 1161261 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-120061"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.147"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.147"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 14:47:51.825283 1161261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 14:47:51.846600 1161261 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 14:47:51.846699 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 14:47:51.862367 1161261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0908 14:47:51.890754 1161261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 14:47:51.921238 1161261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0908 14:47:51.949413 1161261 ssh_runner.go:195] Run: grep 192.168.61.147	control-plane.minikube.internal$ /etc/hosts
	I0908 14:47:51.955910 1161261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:52.155633 1161261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:47:52.176352 1161261 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061 for IP: 192.168.61.147
	I0908 14:47:52.176384 1161261 certs.go:194] generating shared ca certs ...
	I0908 14:47:52.176403 1161261 certs.go:226] acquiring lock for ca certs: {Name:mk10dcd85eee4d8b0413bd848f61156bf964b6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:52.176662 1161261 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key
	I0908 14:47:52.176721 1161261 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key
	I0908 14:47:52.176735 1161261 certs.go:256] generating profile certs ...
	I0908 14:47:52.176854 1161261 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/client.key
	I0908 14:47:52.176942 1161261 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/apiserver.key.71e213e0
	I0908 14:47:52.177028 1161261 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/proxy-client.key
	I0908 14:47:52.177196 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem (1338 bytes)
	W0908 14:47:52.177239 1161261 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875_empty.pem, impossibly tiny 0 bytes
	I0908 14:47:52.177253 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 14:47:52.177292 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem (1082 bytes)
	I0908 14:47:52.177334 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem (1123 bytes)
	I0908 14:47:52.177362 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem (1675 bytes)
	I0908 14:47:52.177417 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:52.178125 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 14:47:52.216860 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 14:47:52.264992 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 14:47:52.315906 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 14:47:52.366512 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 14:47:52.407534 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 14:47:52.457127 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 14:47:52.505152 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 14:47:52.549547 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /usr/share/ca-certificates/11208752.pem (1708 bytes)
	I0908 14:47:52.588151 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 14:47:52.629239 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem --> /usr/share/ca-certificates/1120875.pem (1338 bytes)
	I0908 14:47:52.666334 1161261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 14:47:52.692809 1161261 ssh_runner.go:195] Run: openssl version
	I0908 14:47:52.700407 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208752.pem && ln -fs /usr/share/ca-certificates/11208752.pem /etc/ssl/certs/11208752.pem"
	I0908 14:47:52.717734 1161261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208752.pem
	I0908 14:47:52.725301 1161261 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 13:46 /usr/share/ca-certificates/11208752.pem
	I0908 14:47:52.725396 1161261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208752.pem
	I0908 14:47:52.735515 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11208752.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 14:47:52.751195 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 14:47:52.769652 1161261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:52.777129 1161261 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:35 /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:52.777209 1161261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:52.787042 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 14:47:52.803329 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120875.pem && ln -fs /usr/share/ca-certificates/1120875.pem /etc/ssl/certs/1120875.pem"
	I0908 14:47:52.822959 1161261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120875.pem
	I0908 14:47:52.831158 1161261 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 13:46 /usr/share/ca-certificates/1120875.pem
	I0908 14:47:52.831251 1161261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120875.pem
	I0908 14:47:52.848780 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120875.pem /etc/ssl/certs/51391683.0"
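
The three openssl/ln rounds above follow OpenSSL's subject-hash lookup convention: each CA file under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout -in", and a symlink named "<hash>.0" is created in /etc/ssl/certs so TLS clients can locate the CA by subject hash. A minimal Go sketch of the same idea (hypothetical installCA helper, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA hashes certPath with openssl and links /etc/ssl/certs/<hash>.0
	// to it, mirroring the "openssl x509 -hash" + "ln -fs" pair in the log.
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate the force flag of ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
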
	I0908 14:47:52.910305 1161261 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 14:47:52.947063 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 14:47:52.980746 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 14:47:53.017172 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 14:47:53.029502 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 14:47:53.050518 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 14:47:53.066057 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
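
Each -checkend 86400 call above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same check expressed in Go, as a sketch (hypothetical helper using crypto/x509 directly rather than shelling out):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within d, matching what "openssl x509 -checkend" verifies.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h; regeneration needed")
		}
	}
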
	I0908 14:47:53.090136 1161261 kubeadm.go:392] StartCluster: {Name:pause-120061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-120061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:47:53.090336 1161261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 14:47:53.090436 1161261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 14:47:53.258288 1161261 cri.go:89] found id: "f396885ab602525616471c4a3078ab5befab72cec72eb50c586e5eb321dbf922"
	I0908 14:47:53.258340 1161261 cri.go:89] found id: "6f6f4bdc578435a925c85945bddfe6a5ac8b51b3cc376b776a33a1d585bd2c29"
	I0908 14:47:53.258348 1161261 cri.go:89] found id: "6936912d89250ecd151886026e92e7d034661849c0bfab75a31547b61a0fe66a"
	I0908 14:47:53.258352 1161261 cri.go:89] found id: "ee305c82781917bfbaab4b509ef785aeb3b96bd60c2ec05530b1c3d48a225512"
	I0908 14:47:53.258356 1161261 cri.go:89] found id: "06f87ac3295d31633f69192af6ed4823f0bf18648983434dcaa6db09d069d6bd"
	I0908 14:47:53.258361 1161261 cri.go:89] found id: "8ed8110fce0f009048f3aca5ce0a9a67946864f102d5a3e3a5da1c1053c5cb04"
	I0908 14:47:53.258366 1161261 cri.go:89] found id: ""
	I0908 14:47:53.258430 1161261 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
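For reference, the crictl invocation near the end of the log above (crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system) prints one container ID per line, which is how cri.go produces the individual "found id:" entries. A sketch of that flow (hypothetical helper, not minikube's cri.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainers lists all kube-system container IDs via crictl,
	// splitting the --quiet output (one ID per line) into a slice.
	func kubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if line = strings.TrimSpace(line); line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainers()
		if err != nil {
			panic(err)
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
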
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-120061 -n pause-120061
helpers_test.go:269: (dbg) Run:  kubectl --context pause-120061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-120061 -n pause-120061
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-120061 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-120061 logs -n 25: (2.346409393s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-814283 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo containerd config dump                                                                                                                                                                                                │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-814283 sudo crio config                                                                                                                                                                                                           │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │                     │
	│ delete  │ -p cilium-814283                                                                                                                                                                                                                            │ cilium-814283             │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │ 08 Sep 25 14:45 UTC │
	│ start   │ -p force-systemd-flag-847393 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-847393 │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │ 08 Sep 25 14:46 UTC │
	│ delete  │ -p cert-expiration-001432                                                                                                                                                                                                                   │ cert-expiration-001432    │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │ 08 Sep 25 14:45 UTC │
	│ start   │ -p cert-options-110049 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-110049       │ jenkins │ v1.36.0 │ 08 Sep 25 14:45 UTC │ 08 Sep 25 14:47 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-448633 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                 │ running-upgrade-448633    │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │                     │
	│ delete  │ -p running-upgrade-448633                                                                                                                                                                                                                   │ running-upgrade-448633    │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │ 08 Sep 25 14:46 UTC │
	│ start   │ -p old-k8s-version-454279 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-454279    │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │                     │
	│ ssh     │ force-systemd-flag-847393 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-847393 │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │ 08 Sep 25 14:46 UTC │
	│ delete  │ -p force-systemd-flag-847393                                                                                                                                                                                                                │ force-systemd-flag-847393 │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │ 08 Sep 25 14:46 UTC │
	│ start   │ -p no-preload-301894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-301894         │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │                     │
	│ start   │ -p pause-120061 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-120061              │ jenkins │ v1.36.0 │ 08 Sep 25 14:46 UTC │ 08 Sep 25 14:48 UTC │
	│ ssh     │ cert-options-110049 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-110049       │ jenkins │ v1.36.0 │ 08 Sep 25 14:47 UTC │ 08 Sep 25 14:47 UTC │
	│ ssh     │ -p cert-options-110049 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-110049       │ jenkins │ v1.36.0 │ 08 Sep 25 14:47 UTC │ 08 Sep 25 14:47 UTC │
	│ delete  │ -p cert-options-110049                                                                                                                                                                                                                      │ cert-options-110049       │ jenkins │ v1.36.0 │ 08 Sep 25 14:47 UTC │ 08 Sep 25 14:47 UTC │
	│ start   │ -p embed-certs-372004 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-372004        │ jenkins │ v1.36.0 │ 08 Sep 25 14:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 14:47:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 14:47:09.160568 1161554 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:47:09.160683 1161554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:47:09.160689 1161554 out.go:374] Setting ErrFile to fd 2...
	I0908 14:47:09.160695 1161554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:47:09.160939 1161554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 14:47:09.161680 1161554 out.go:368] Setting JSON to false
	I0908 14:47:09.162744 1161554 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":19773,"bootTime":1757323056,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 14:47:09.162871 1161554 start.go:140] virtualization: kvm guest
	I0908 14:47:09.165021 1161554 out.go:179] * [embed-certs-372004] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 14:47:09.166691 1161554 notify.go:220] Checking for updates...
	I0908 14:47:09.166731 1161554 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 14:47:09.168900 1161554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:47:09.170377 1161554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 14:47:09.171507 1161554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:47:09.172730 1161554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 14:47:09.173985 1161554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:47:09.175713 1161554 config.go:182] Loaded profile config "no-preload-301894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:47:09.175835 1161554 config.go:182] Loaded profile config "old-k8s-version-454279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0908 14:47:09.175952 1161554 config.go:182] Loaded profile config "pause-120061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:47:09.176071 1161554 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:47:09.218786 1161554 out.go:179] * Using the kvm2 driver based on user configuration
	I0908 14:47:09.220218 1161554 start.go:304] selected driver: kvm2
	I0908 14:47:09.220247 1161554 start.go:918] validating driver "kvm2" against <nil>
	I0908 14:47:09.220264 1161554 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:47:09.221394 1161554 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:47:09.221493 1161554 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21508-1116714/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 14:47:09.238868 1161554 install.go:137] /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 14:47:09.238946 1161554 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 14:47:09.239238 1161554 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 14:47:09.239288 1161554 cni.go:84] Creating CNI manager for ""
	I0908 14:47:09.239343 1161554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:47:09.239356 1161554 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
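
The two cni.go lines above record a default: no CNI was requested, so the kvm2 driver plus the crio runtime gets the bridge CNI, and NetworkPlugin is then set to cni. A sketch of that decision rule as read from the log (assumed logic, not minikube's actual cni.go):

	package main

	import "fmt"

	// defaultCNI mirrors the choice logged above: an explicit request wins;
	// otherwise kvm2 + crio gets bridge. The "auto" fallback for other
	// combinations is an assumption for illustration.
	func defaultCNI(driver, runtime, requested string) string {
		if requested != "" {
			return requested
		}
		if driver == "kvm2" && runtime == "crio" {
			return "bridge"
		}
		return "auto"
	}

	func main() {
		fmt.Println(defaultCNI("kvm2", "crio", "")) // prints: bridge
	}
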
	I0908 14:47:09.239447 1161554 start.go:348] cluster config:
	{Name:embed-certs-372004 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-372004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:47:09.239572 1161554 iso.go:125] acquiring lock: {Name:mk347390bf24761f2c39bf4cd5b718f157a50faa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:47:09.241462 1161554 out.go:179] * Starting "embed-certs-372004" primary control-plane node in "embed-certs-372004" cluster
	I0908 14:47:13.234419 1161065 start.go:364] duration metric: took 31.013485176s to acquireMachinesLock for "no-preload-301894"
	I0908 14:47:13.234502 1161065 start.go:93] Provisioning new machine with config: &{Name:no-preload-301894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-301894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 14:47:13.234615 1161065 start.go:125] createHost starting for "" (driver="kvm2")
	I0908 14:47:08.421613 1160669 main.go:141] libmachine: (old-k8s-version-454279) reserved static IP address 192.168.50.48 for domain old-k8s-version-454279
	I0908 14:47:08.421639 1160669 main.go:141] libmachine: (old-k8s-version-454279) waiting for SSH...
	I0908 14:47:08.421827 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Getting to WaitForSSH function...
	I0908 14:47:08.425019 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:08.425509 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279
	I0908 14:47:08.425534 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | unable to find defined IP address of network mk-old-k8s-version-454279 interface with MAC address 52:54:00:78:56:ae
	I0908 14:47:08.425750 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Using SSH client type: external
	I0908 14:47:08.425784 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Using SSH private key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa (-rw-------)
	I0908 14:47:08.425843 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 14:47:08.425862 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | About to run SSH command:
	I0908 14:47:08.425880 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | exit 0
	I0908 14:47:08.430385 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | SSH cmd err, output: exit status 255: 
	I0908 14:47:08.430424 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0908 14:47:08.430434 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | command : exit 0
	I0908 14:47:08.430439 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | err     : exit status 255
	I0908 14:47:08.430448 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | output  : 
	I0908 14:47:11.432171 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Getting to WaitForSSH function...
	I0908 14:47:11.435749 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.436378 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:11.436414 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.436668 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Using SSH client type: external
	I0908 14:47:11.436689 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Using SSH private key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa (-rw-------)
	I0908 14:47:11.436753 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 14:47:11.436774 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | About to run SSH command:
	I0908 14:47:11.436787 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | exit 0
	I0908 14:47:11.569076 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | SSH cmd err, output: <nil>: 
	I0908 14:47:11.569330 1160669 main.go:141] libmachine: (old-k8s-version-454279) KVM machine creation complete
	I0908 14:47:11.569697 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetConfigRaw
	I0908 14:47:11.570442 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:11.570678 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:11.570867 1160669 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0908 14:47:11.570882 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetState
	I0908 14:47:11.572530 1160669 main.go:141] libmachine: Detecting operating system of created instance...
	I0908 14:47:11.572548 1160669 main.go:141] libmachine: Waiting for SSH to be available...
	I0908 14:47:11.572554 1160669 main.go:141] libmachine: Getting to WaitForSSH function...
	I0908 14:47:11.572562 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:11.575449 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.575866 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:11.575893 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.576075 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:11.576303 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.576473 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.576619 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:11.576834 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:11.577105 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:11.577117 1160669 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0908 14:47:11.696175 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:47:11.696207 1160669 main.go:141] libmachine: Detecting the provisioner...
	I0908 14:47:11.696217 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:11.699719 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.700138 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:11.700159 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.700334 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:11.700589 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.700796 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.700947 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:11.701143 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:11.701350 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:11.701361 1160669 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0908 14:47:11.821894 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0908 14:47:11.822006 1160669 main.go:141] libmachine: found compatible host: buildroot
	I0908 14:47:11.822037 1160669 main.go:141] libmachine: Provisioning with buildroot...
	I0908 14:47:11.822052 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetMachineName
	I0908 14:47:11.822417 1160669 buildroot.go:166] provisioning hostname "old-k8s-version-454279"
	I0908 14:47:11.822451 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetMachineName
	I0908 14:47:11.822694 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:11.827383 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.827954 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:11.827998 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.828197 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:11.828461 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.828657 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.828803 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:11.829021 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:11.829259 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:11.829278 1160669 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-454279 && echo "old-k8s-version-454279" | sudo tee /etc/hostname
	I0908 14:47:11.970256 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-454279
	
	I0908 14:47:11.970285 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:11.973594 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.974161 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:11.974183 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:11.974497 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:11.974721 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.974906 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:11.975126 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:11.975320 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:11.975562 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:11.975605 1160669 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-454279' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-454279/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-454279' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:47:12.104712 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:47:12.104744 1160669 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21508-1116714/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-1116714/.minikube}
	I0908 14:47:12.104764 1160669 buildroot.go:174] setting up certificates
	I0908 14:47:12.104774 1160669 provision.go:84] configureAuth start
	I0908 14:47:12.104783 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetMachineName
	I0908 14:47:12.105185 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetIP
	I0908 14:47:12.108318 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.108694 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.108727 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.109039 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:12.111754 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.112092 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.112124 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.112306 1160669 provision.go:143] copyHostCerts
	I0908 14:47:12.112402 1160669 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem, removing ...
	I0908 14:47:12.112418 1160669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem
	I0908 14:47:12.112486 1160669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem (1675 bytes)
	I0908 14:47:12.112586 1160669 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem, removing ...
	I0908 14:47:12.112595 1160669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem
	I0908 14:47:12.112614 1160669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem (1082 bytes)
	I0908 14:47:12.112663 1160669 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem, removing ...
	I0908 14:47:12.112670 1160669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem
	I0908 14:47:12.112687 1160669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem (1123 bytes)
	I0908 14:47:12.112731 1160669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-454279 san=[127.0.0.1 192.168.50.48 localhost minikube old-k8s-version-454279]
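
The provision.go line above issues a server certificate signed by the CA whose SAN list covers the loopback address, the VM IP, and the hostname aliases shown in san=[...]. A simplified Go sketch of issuing such a SAN-bearing certificate (self-signed stand-in CA generated inline; minikube's real code loads ca.pem/ca-key.pem instead, and error handling is elided):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA; the provisioning step uses the existing minikube CA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SAN entries from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-454279"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.48")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-454279"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
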
	I0908 14:47:12.456603 1160669 provision.go:177] copyRemoteCerts
	I0908 14:47:12.456689 1160669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:47:12.456720 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:12.459997 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.460440 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.460462 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.460632 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:12.460892 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:12.461102 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:12.461282 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:12.555929 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 14:47:12.587739 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0908 14:47:12.619560 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 14:47:12.653024 1160669 provision.go:87] duration metric: took 548.233152ms to configureAuth
	I0908 14:47:12.653061 1160669 buildroot.go:189] setting minikube options for container-runtime
	I0908 14:47:12.653249 1160669 config.go:182] Loaded profile config "old-k8s-version-454279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0908 14:47:12.653344 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:12.656324 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.656711 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.656762 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.656968 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:12.657232 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:12.657399 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:12.657567 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:12.657755 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:12.657974 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:12.657989 1160669 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 14:47:12.942523 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
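
The SSH command above drops a sysconfig file injecting the service CIDR as an insecure registry into crio's options, then restarts crio to pick it up. A sketch of how such a one-liner could be assembled before being run over SSH (hypothetical helper; the template used by minikube's buildroot.go may differ):

	package main

	import "fmt"

	// crioSysconfigCmd builds the mkdir + printf|tee + restart pipeline seen
	// in the log, parameterized on the option string to write.
	func crioSysconfigCmd(opts string) string {
		return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='%s'\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
	}

	func main() {
		fmt.Println(crioSysconfigCmd("--insecure-registry 10.96.0.0/12 "))
	}
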
	
	I0908 14:47:12.942555 1160669 main.go:141] libmachine: Checking connection to Docker...
	I0908 14:47:12.942568 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetURL
	I0908 14:47:12.944034 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | using libvirt version 6000000
	I0908 14:47:12.947008 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.947476 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.947510 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.947716 1160669 main.go:141] libmachine: Docker is up and running!
	I0908 14:47:12.947731 1160669 main.go:141] libmachine: Reticulating splines...
	I0908 14:47:12.947738 1160669 client.go:171] duration metric: took 28.558043276s to LocalClient.Create
	I0908 14:47:12.947766 1160669 start.go:167] duration metric: took 28.55812507s to libmachine.API.Create "old-k8s-version-454279"
	I0908 14:47:12.947781 1160669 start.go:293] postStartSetup for "old-k8s-version-454279" (driver="kvm2")
	I0908 14:47:12.947797 1160669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:47:12.947820 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:12.948102 1160669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:47:12.948128 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:12.950626 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.950966 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:12.950991 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:12.951166 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:12.951368 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:12.951564 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:12.951709 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:13.045854 1160669 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:47:13.051916 1160669 info.go:137] Remote host: Buildroot 2025.02
	I0908 14:47:13.051954 1160669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/addons for local assets ...
	I0908 14:47:13.052050 1160669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/files for local assets ...
	I0908 14:47:13.052159 1160669 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem -> 11208752.pem in /etc/ssl/certs
	I0908 14:47:13.052292 1160669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:47:13.066204 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:13.105685 1160669 start.go:296] duration metric: took 157.882889ms for postStartSetup
	I0908 14:47:13.105743 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetConfigRaw
	I0908 14:47:13.106484 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetIP
	I0908 14:47:13.109310 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.109734 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:13.109760 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.110162 1160669 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/config.json ...
	I0908 14:47:13.110392 1160669 start.go:128] duration metric: took 28.743951957s to createHost
	I0908 14:47:13.110424 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:13.113374 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.113818 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:13.113847 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.114065 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:13.114325 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:13.114518 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:13.114687 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:13.114875 1160669 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:13.115133 1160669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0908 14:47:13.115147 1160669 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 14:47:13.234172 1160669 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757342833.210015423
	
	I0908 14:47:13.234205 1160669 fix.go:216] guest clock: 1757342833.210015423
	I0908 14:47:13.234217 1160669 fix.go:229] Guest: 2025-09-08 14:47:13.210015423 +0000 UTC Remote: 2025-09-08 14:47:13.110406104 +0000 UTC m=+59.772811959 (delta=99.609319ms)
	I0908 14:47:13.234297 1160669 fix.go:200] guest clock delta is within tolerance: 99.609319ms
	I0908 14:47:13.234318 1160669 start.go:83] releasing machines lock for "old-k8s-version-454279", held for 28.868136263s
	I0908 14:47:13.234361 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:13.234709 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetIP
	I0908 14:47:13.237700 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.238266 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:13.238303 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.238541 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:13.239356 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:13.239606 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:13.239737 1160669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:47:13.239792 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:13.239871 1160669 ssh_runner.go:195] Run: cat /version.json
	I0908 14:47:13.239905 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:13.243385 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.243476 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.243941 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:13.243993 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.244146 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:13.244186 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:13.244280 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:13.244427 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:13.244566 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:13.244670 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:13.244696 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:13.244876 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:13.244964 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:13.245000 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:13.343174 1160669 ssh_runner.go:195] Run: systemctl --version
	I0908 14:47:13.377738 1160669 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 14:47:09.242571 1161554 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 14:47:09.242614 1161554 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 14:47:09.242628 1161554 cache.go:58] Caching tarball of preloaded images
	I0908 14:47:09.242720 1161554 preload.go:172] Found /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 14:47:09.242733 1161554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 14:47:09.242856 1161554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/embed-certs-372004/config.json ...
	I0908 14:47:09.242886 1161554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/embed-certs-372004/config.json: {Name:mk36cbfc5ffff3b9800a8cb272fb6fc4e8a2f5b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:09.243050 1161554 start.go:360] acquireMachinesLock for embed-certs-372004: {Name:mk0626ae9b324aeb819357e3de70b05b9e4c30a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 14:47:13.551387 1160669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 14:47:13.560862 1160669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 14:47:13.560956 1160669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:47:13.585985 1160669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 14:47:13.586024 1160669 start.go:495] detecting cgroup driver to use...
	I0908 14:47:13.586136 1160669 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 14:47:13.609341 1160669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 14:47:13.630973 1160669 docker.go:218] disabling cri-docker service (if available) ...
	I0908 14:47:13.631082 1160669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 14:47:13.651272 1160669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 14:47:13.673082 1160669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 14:47:13.830972 1160669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 14:47:14.012858 1160669 docker.go:234] disabling docker service ...
	I0908 14:47:14.012936 1160669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 14:47:14.034138 1160669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 14:47:14.056076 1160669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 14:47:14.298395 1160669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 14:47:14.461146 1160669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:47:14.479862 1160669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:47:14.508390 1160669 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0908 14:47:14.508479 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.523751 1160669 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 14:47:14.523871 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.539963 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.555827 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.571980 1160669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:47:14.589217 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.604726 1160669 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:14.636771 1160669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
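
After this run of sed edits, the touched portion of /etc/crio/crio.conf.d/02-crio.conf should read roughly as below. This is a sketch assuming the stock drop-in already carries pause_image and cgroup_manager keys (the in-place substitutions above depend on that); the [crio.image]/[crio.runtime] section headers come from cri-o's usual layout, not from this log:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
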
	I0908 14:47:14.651552 1160669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:47:14.665337 1160669 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 14:47:14.665417 1160669 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 14:47:14.690509 1160669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
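
The netfilter lines above follow a probe-then-load pattern: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, so a failed sysctl read is treated as "module not loaded yet" rather than a hard error, and modprobe is attempted next. A minimal compilable sketch of that pattern (hypothetical helper name, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBrNetfilter mirrors the sequence in the log: probe the sysctl,
	// and only if the probe fails try to load the module.
	func ensureBrNetfilter() error {
		if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() == nil {
			return nil // sysctl readable, module already loaded
		}
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
		return nil
	}

	func main() {
		if err := ensureBrNetfilter(); err != nil {
			fmt.Println(err)
		}
	}
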
	I0908 14:47:14.705109 1160669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:14.866883 1160669 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 14:47:14.999587 1160669 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 14:47:14.999709 1160669 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 14:47:15.005990 1160669 start.go:563] Will wait 60s for crictl version
	I0908 14:47:15.006108 1160669 ssh_runner.go:195] Run: which crictl
	I0908 14:47:15.011598 1160669 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:47:15.061672 1160669 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 14:47:15.061784 1160669 ssh_runner.go:195] Run: crio --version
	I0908 14:47:15.096342 1160669 ssh_runner.go:195] Run: crio --version
	I0908 14:47:15.158474 1160669 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.29.1 ...
	I0908 14:47:13.236765 1161065 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 14:47:13.237043 1161065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:13.237097 1161065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:13.257879 1161065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I0908 14:47:13.258443 1161065 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:13.259015 1161065 main.go:141] libmachine: Using API Version  1
	I0908 14:47:13.259044 1161065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:13.259491 1161065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:13.259748 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetMachineName
	I0908 14:47:13.259917 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:13.260103 1161065 start.go:159] libmachine.API.Create for "no-preload-301894" (driver="kvm2")
	I0908 14:47:13.260133 1161065 client.go:168] LocalClient.Create starting
	I0908 14:47:13.260171 1161065 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem
	I0908 14:47:13.260211 1161065 main.go:141] libmachine: Decoding PEM data...
	I0908 14:47:13.260226 1161065 main.go:141] libmachine: Parsing certificate...
	I0908 14:47:13.260300 1161065 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem
	I0908 14:47:13.260321 1161065 main.go:141] libmachine: Decoding PEM data...
	I0908 14:47:13.260332 1161065 main.go:141] libmachine: Parsing certificate...
	I0908 14:47:13.260346 1161065 main.go:141] libmachine: Running pre-create checks...
	I0908 14:47:13.260354 1161065 main.go:141] libmachine: (no-preload-301894) Calling .PreCreateCheck
	I0908 14:47:13.260713 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetConfigRaw
	I0908 14:47:13.261185 1161065 main.go:141] libmachine: Creating machine...
	I0908 14:47:13.261200 1161065 main.go:141] libmachine: (no-preload-301894) Calling .Create
	I0908 14:47:13.261374 1161065 main.go:141] libmachine: (no-preload-301894) creating KVM machine...
	I0908 14:47:13.261395 1161065 main.go:141] libmachine: (no-preload-301894) creating network...
	I0908 14:47:13.262893 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found existing default KVM network
	I0908 14:47:13.264043 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:13.263851 1161595 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013a80}
	I0908 14:47:13.264081 1161065 main.go:141] libmachine: (no-preload-301894) DBG | created network xml: 
	I0908 14:47:13.264101 1161065 main.go:141] libmachine: (no-preload-301894) DBG | <network>
	I0908 14:47:13.264114 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   <name>mk-no-preload-301894</name>
	I0908 14:47:13.264124 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   <dns enable='no'/>
	I0908 14:47:13.264134 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   
	I0908 14:47:13.264149 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0908 14:47:13.264160 1161065 main.go:141] libmachine: (no-preload-301894) DBG |     <dhcp>
	I0908 14:47:13.264170 1161065 main.go:141] libmachine: (no-preload-301894) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0908 14:47:13.264183 1161065 main.go:141] libmachine: (no-preload-301894) DBG |     </dhcp>
	I0908 14:47:13.264193 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   </ip>
	I0908 14:47:13.264204 1161065 main.go:141] libmachine: (no-preload-301894) DBG |   
	I0908 14:47:13.264215 1161065 main.go:141] libmachine: (no-preload-301894) DBG | </network>
	I0908 14:47:13.264229 1161065 main.go:141] libmachine: (no-preload-301894) DBG | 
	I0908 14:47:13.270638 1161065 main.go:141] libmachine: (no-preload-301894) DBG | trying to create private KVM network mk-no-preload-301894 192.168.39.0/24...
	I0908 14:47:13.368149 1161065 main.go:141] libmachine: (no-preload-301894) DBG | private KVM network mk-no-preload-301894 192.168.39.0/24 created
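
Stripped of the DBG prefixes, the network definition logged above assembles to:

	<network>
	  <name>mk-no-preload-301894</name>
	  <dns enable='no'/>

	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
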
	I0908 14:47:13.368182 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:13.368076 1161595 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:47:13.368195 1161065 main.go:141] libmachine: (no-preload-301894) setting up store path in /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894 ...
	I0908 14:47:13.368219 1161065 main.go:141] libmachine: (no-preload-301894) building disk image from file:///home/jenkins/minikube-integration/21508-1116714/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 14:47:13.368234 1161065 main.go:141] libmachine: (no-preload-301894) Downloading /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21508-1116714/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 14:47:13.708843 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:13.708657 1161595 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa...
	I0908 14:47:13.876885 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:13.876750 1161595 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/no-preload-301894.rawdisk...
	I0908 14:47:13.876910 1161065 main.go:141] libmachine: (no-preload-301894) DBG | Writing magic tar header
	I0908 14:47:13.876924 1161065 main.go:141] libmachine: (no-preload-301894) DBG | Writing SSH key tar header
	I0908 14:47:13.877045 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:13.876948 1161595 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894 ...
	I0908 14:47:13.877145 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894
	I0908 14:47:13.877178 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines
	I0908 14:47:13.877201 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894 (perms=drwx------)
	I0908 14:47:13.877215 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:47:13.877231 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714
	I0908 14:47:13.877244 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0908 14:47:13.877258 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home/jenkins
	I0908 14:47:13.877270 1161065 main.go:141] libmachine: (no-preload-301894) DBG | checking permissions on dir: /home
	I0908 14:47:13.877284 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines (perms=drwxr-xr-x)
	I0908 14:47:13.877309 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube (perms=drwxr-xr-x)
	I0908 14:47:13.877354 1161065 main.go:141] libmachine: (no-preload-301894) DBG | skipping /home - not owner
	I0908 14:47:13.877374 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714 (perms=drwxrwxr-x)
	I0908 14:47:13.877390 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0908 14:47:13.877402 1161065 main.go:141] libmachine: (no-preload-301894) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0908 14:47:13.877414 1161065 main.go:141] libmachine: (no-preload-301894) creating domain...
	I0908 14:47:13.878601 1161065 main.go:141] libmachine: (no-preload-301894) define libvirt domain using xml: 
	I0908 14:47:13.878635 1161065 main.go:141] libmachine: (no-preload-301894) <domain type='kvm'>
	I0908 14:47:13.878647 1161065 main.go:141] libmachine: (no-preload-301894)   <name>no-preload-301894</name>
	I0908 14:47:13.878661 1161065 main.go:141] libmachine: (no-preload-301894)   <memory unit='MiB'>3072</memory>
	I0908 14:47:13.878671 1161065 main.go:141] libmachine: (no-preload-301894)   <vcpu>2</vcpu>
	I0908 14:47:13.878677 1161065 main.go:141] libmachine: (no-preload-301894)   <features>
	I0908 14:47:13.878688 1161065 main.go:141] libmachine: (no-preload-301894)     <acpi/>
	I0908 14:47:13.878697 1161065 main.go:141] libmachine: (no-preload-301894)     <apic/>
	I0908 14:47:13.878706 1161065 main.go:141] libmachine: (no-preload-301894)     <pae/>
	I0908 14:47:13.878715 1161065 main.go:141] libmachine: (no-preload-301894)     
	I0908 14:47:13.878725 1161065 main.go:141] libmachine: (no-preload-301894)   </features>
	I0908 14:47:13.878735 1161065 main.go:141] libmachine: (no-preload-301894)   <cpu mode='host-passthrough'>
	I0908 14:47:13.878743 1161065 main.go:141] libmachine: (no-preload-301894)   
	I0908 14:47:13.878752 1161065 main.go:141] libmachine: (no-preload-301894)   </cpu>
	I0908 14:47:13.878784 1161065 main.go:141] libmachine: (no-preload-301894)   <os>
	I0908 14:47:13.878810 1161065 main.go:141] libmachine: (no-preload-301894)     <type>hvm</type>
	I0908 14:47:13.878822 1161065 main.go:141] libmachine: (no-preload-301894)     <boot dev='cdrom'/>
	I0908 14:47:13.878829 1161065 main.go:141] libmachine: (no-preload-301894)     <boot dev='hd'/>
	I0908 14:47:13.878842 1161065 main.go:141] libmachine: (no-preload-301894)     <bootmenu enable='no'/>
	I0908 14:47:13.878851 1161065 main.go:141] libmachine: (no-preload-301894)   </os>
	I0908 14:47:13.878859 1161065 main.go:141] libmachine: (no-preload-301894)   <devices>
	I0908 14:47:13.878869 1161065 main.go:141] libmachine: (no-preload-301894)     <disk type='file' device='cdrom'>
	I0908 14:47:13.878887 1161065 main.go:141] libmachine: (no-preload-301894)       <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/boot2docker.iso'/>
	I0908 14:47:13.878903 1161065 main.go:141] libmachine: (no-preload-301894)       <target dev='hdc' bus='scsi'/>
	I0908 14:47:13.878914 1161065 main.go:141] libmachine: (no-preload-301894)       <readonly/>
	I0908 14:47:13.878924 1161065 main.go:141] libmachine: (no-preload-301894)     </disk>
	I0908 14:47:13.878934 1161065 main.go:141] libmachine: (no-preload-301894)     <disk type='file' device='disk'>
	I0908 14:47:13.878947 1161065 main.go:141] libmachine: (no-preload-301894)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0908 14:47:13.878963 1161065 main.go:141] libmachine: (no-preload-301894)       <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/no-preload-301894.rawdisk'/>
	I0908 14:47:13.878973 1161065 main.go:141] libmachine: (no-preload-301894)       <target dev='hda' bus='virtio'/>
	I0908 14:47:13.879001 1161065 main.go:141] libmachine: (no-preload-301894)     </disk>
	I0908 14:47:13.879031 1161065 main.go:141] libmachine: (no-preload-301894)     <interface type='network'>
	I0908 14:47:13.879042 1161065 main.go:141] libmachine: (no-preload-301894)       <source network='mk-no-preload-301894'/>
	I0908 14:47:13.879050 1161065 main.go:141] libmachine: (no-preload-301894)       <model type='virtio'/>
	I0908 14:47:13.879059 1161065 main.go:141] libmachine: (no-preload-301894)     </interface>
	I0908 14:47:13.879069 1161065 main.go:141] libmachine: (no-preload-301894)     <interface type='network'>
	I0908 14:47:13.879079 1161065 main.go:141] libmachine: (no-preload-301894)       <source network='default'/>
	I0908 14:47:13.879090 1161065 main.go:141] libmachine: (no-preload-301894)       <model type='virtio'/>
	I0908 14:47:13.879100 1161065 main.go:141] libmachine: (no-preload-301894)     </interface>
	I0908 14:47:13.879110 1161065 main.go:141] libmachine: (no-preload-301894)     <serial type='pty'>
	I0908 14:47:13.879122 1161065 main.go:141] libmachine: (no-preload-301894)       <target port='0'/>
	I0908 14:47:13.879133 1161065 main.go:141] libmachine: (no-preload-301894)     </serial>
	I0908 14:47:13.879143 1161065 main.go:141] libmachine: (no-preload-301894)     <console type='pty'>
	I0908 14:47:13.879153 1161065 main.go:141] libmachine: (no-preload-301894)       <target type='serial' port='0'/>
	I0908 14:47:13.879160 1161065 main.go:141] libmachine: (no-preload-301894)     </console>
	I0908 14:47:13.879165 1161065 main.go:141] libmachine: (no-preload-301894)     <rng model='virtio'>
	I0908 14:47:13.879173 1161065 main.go:141] libmachine: (no-preload-301894)       <backend model='random'>/dev/random</backend>
	I0908 14:47:13.879181 1161065 main.go:141] libmachine: (no-preload-301894)     </rng>
	I0908 14:47:13.879201 1161065 main.go:141] libmachine: (no-preload-301894)     
	I0908 14:47:13.879210 1161065 main.go:141] libmachine: (no-preload-301894)     
	I0908 14:47:13.879231 1161065 main.go:141] libmachine: (no-preload-301894)   </devices>
	I0908 14:47:13.879251 1161065 main.go:141] libmachine: (no-preload-301894) </domain>
	I0908 14:47:13.879284 1161065 main.go:141] libmachine: (no-preload-301894) 
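
For reference, the domain XML logged line-by-line above reassembles to the following (whitespace normalized, content unchanged):

	<domain type='kvm'>
	  <name>no-preload-301894</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/no-preload-301894.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-no-preload-301894'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
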
	I0908 14:47:13.884517 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:fd:a3:0d in network default
	I0908 14:47:13.885269 1161065 main.go:141] libmachine: (no-preload-301894) starting domain...
	I0908 14:47:13.885298 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:13.885316 1161065 main.go:141] libmachine: (no-preload-301894) ensuring networks are active...
	I0908 14:47:13.886202 1161065 main.go:141] libmachine: (no-preload-301894) Ensuring network default is active
	I0908 14:47:13.886570 1161065 main.go:141] libmachine: (no-preload-301894) Ensuring network mk-no-preload-301894 is active
	I0908 14:47:13.887171 1161065 main.go:141] libmachine: (no-preload-301894) getting domain XML...
	I0908 14:47:13.888178 1161065 main.go:141] libmachine: (no-preload-301894) creating domain...
	I0908 14:47:14.279275 1161065 main.go:141] libmachine: (no-preload-301894) waiting for IP...
	I0908 14:47:14.280366 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:14.280906 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:14.280940 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:14.280885 1161595 retry.go:31] will retry after 299.887118ms: waiting for domain to come up
	I0908 14:47:14.582745 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:14.583325 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:14.583356 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:14.583297 1161595 retry.go:31] will retry after 249.657328ms: waiting for domain to come up
	I0908 14:47:14.834783 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:14.835389 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:14.835426 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:14.835339 1161595 retry.go:31] will retry after 436.07914ms: waiting for domain to come up
	I0908 14:47:15.273234 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:15.273849 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:15.273905 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:15.273842 1161595 retry.go:31] will retry after 388.986383ms: waiting for domain to come up
	I0908 14:47:15.664745 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:15.665480 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:15.665516 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:15.665454 1161595 retry.go:31] will retry after 697.087111ms: waiting for domain to come up
	I0908 14:47:16.364223 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:16.364917 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:16.364953 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:16.364892 1161595 retry.go:31] will retry after 932.556534ms: waiting for domain to come up
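
The "will retry after Nms: waiting for domain to come up" lines come from a jittered retry loop: re-run the IP lookup with a randomized, growing delay until the DHCP lease appears or an overall deadline passes. A self-contained sketch of that shape (names and durations illustrative, not minikube's retry.go):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retryUntil(timeout time.Duration, probe func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for {
			err := probe()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			// jitter is why the logged intervals wander (299ms, 249ms, 436ms, ...)
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
			if delay < 2*time.Second {
				delay *= 2 // back off, but cap the growth
			}
		}
	}

	func main() {
		tries := 0
		err := retryUntil(5*time.Second, func() error {
			tries++
			if tries < 3 {
				return fmt.Errorf("no IP yet")
			}
			return nil
		})
		fmt.Println(err, "after", tries, "tries")
	}
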
	I0908 14:47:15.230993 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetIP
	I0908 14:47:15.234315 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:15.234723 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:15.234760 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:15.234980 1160669 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0908 14:47:15.240407 1160669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 14:47:15.259093 1160669 kubeadm.go:875] updating cluster {Name:old-k8s-version-454279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-454279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.48 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 14:47:15.259235 1160669 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 14:47:15.259281 1160669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:15.301882 1160669 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0". assuming images are not preloaded.
	I0908 14:47:15.301959 1160669 ssh_runner.go:195] Run: which lz4
	I0908 14:47:15.307335 1160669 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 14:47:15.313251 1160669 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 14:47:15.313305 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457056555 bytes)
	I0908 14:47:17.472852 1160669 crio.go:462] duration metric: took 2.165558075s to copy over tarball
	I0908 14:47:17.472961 1160669 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 14:47:19.628401 1160669 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155410467s)
	I0908 14:47:19.628432 1160669 crio.go:469] duration metric: took 2.155544498s to extract the tarball
	I0908 14:47:19.628440 1160669 ssh_runner.go:146] rm: /preloaded.tar.lz4
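
A note for anyone replaying this step by hand: --xattrs --xattrs-include security.capability preserves the security.capability extended attribute (file capabilities on binaries inside the preloaded image store) that a plain tar -xf would drop, and -I lz4 streams decompression through lz4 rather than requiring an intermediate file.
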
	I0908 14:47:19.675281 1160669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:19.727396 1160669 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 14:47:19.727429 1160669 cache_images.go:85] Images are preloaded, skipping loading
	I0908 14:47:19.727440 1160669 kubeadm.go:926] updating node { 192.168.50.48 8443 v1.28.0 crio true true} ...
	I0908 14:47:19.727610 1160669 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-454279 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-454279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
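
Reassembled, the kubelet drop-in rendered above (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the 321-byte scp below) is:

	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-454279 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.48

	[Install]

The empty ExecStart= is the standard systemd idiom for clearing the ExecStart inherited from kubelet.service before overriding it.
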
	I0908 14:47:19.727733 1160669 ssh_runner.go:195] Run: crio config
	I0908 14:47:19.779753 1160669 cni.go:84] Creating CNI manager for ""
	I0908 14:47:19.779853 1160669 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:47:19.779879 1160669 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 14:47:19.779945 1160669 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.48 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-454279 NodeName:old-k8s-version-454279 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 14:47:19.780270 1160669 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-454279"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
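This rendered kubeadm config is what the 2163-byte scp below ships to /var/tmp/minikube/kubeadm.yaml.new; after the copy to /var/tmp/minikube/kubeadm.yaml further down, it would typically be consumed with an invocation along the lines of (illustrative, not taken from this log):

	kubeadm init --config /var/tmp/minikube/kubeadm.yaml
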
	I0908 14:47:19.780390 1160669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0908 14:47:19.793661 1160669 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 14:47:19.793772 1160669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 14:47:19.806295 1160669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0908 14:47:19.830043 1160669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 14:47:19.854231 1160669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0908 14:47:19.877290 1160669 ssh_runner.go:195] Run: grep 192.168.50.48	control-plane.minikube.internal$ /etc/hosts
	I0908 14:47:19.882225 1160669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 14:47:19.898708 1160669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:20.072508 1160669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:47:20.115151 1160669 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279 for IP: 192.168.50.48
	I0908 14:47:20.115180 1160669 certs.go:194] generating shared ca certs ...
	I0908 14:47:20.115201 1160669 certs.go:226] acquiring lock for ca certs: {Name:mk10dcd85eee4d8b0413bd848f61156bf964b6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.115429 1160669 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key
	I0908 14:47:20.115510 1160669 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key
	I0908 14:47:20.115532 1160669 certs.go:256] generating profile certs ...
	I0908 14:47:20.115621 1160669 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.key
	I0908 14:47:20.115645 1160669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt with IP's: []
	I0908 14:47:20.293700 1160669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt ...
	I0908 14:47:20.293741 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: {Name:mk206ca7f18f3cdbac0fc6bdbd1f7a44a1300b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.293963 1160669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.key ...
	I0908 14:47:20.293983 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.key: {Name:mk2f6e6e643bf72cd3b7e7fd62b6e0345a3d0b4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.294237 1160669 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key.ed44818c
	I0908 14:47:20.294268 1160669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt.ed44818c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.48]
	I0908 14:47:20.334022 1160669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt.ed44818c ...
	I0908 14:47:20.334063 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt.ed44818c: {Name:mka3682baa7d5ffca313ea6762fc49d2c8e24276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.334247 1160669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key.ed44818c ...
	I0908 14:47:20.334264 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key.ed44818c: {Name:mk503685508fa39889cb4dda79781df5950a1ee8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.334366 1160669 certs.go:381] copying /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt.ed44818c -> /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt
	I0908 14:47:20.334483 1160669 certs.go:385] copying /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key.ed44818c -> /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key
	I0908 14:47:20.334579 1160669 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.key
	I0908 14:47:20.334609 1160669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.crt with IP's: []
	I0908 14:47:20.583404 1160669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.crt ...
	I0908 14:47:20.583440 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.crt: {Name:mk3dfdd9b5abba8bdc7d1a726f96ef5fb2519b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:20.583668 1160669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.key ...
	I0908 14:47:20.583686 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.key: {Name:mkf3c224e3a6d70be668ea603104347ec1607f19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
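
The crypto.go steps above map onto Go's standard crypto/x509 flow: build a certificate template carrying the SAN IPs from the log, then sign it with the CA key. A compilable, minimal sketch of that flow (it generates a throwaway CA inline where the real run loads ca.key from disk, and error handling is elided; this is not minikube's crypto.go):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf certificate with the IP SANs seen in the apiserver cert above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.48"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
	}
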
	I0908 14:47:20.583890 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem (1338 bytes)
	W0908 14:47:20.583949 1160669 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875_empty.pem, impossibly tiny 0 bytes
	I0908 14:47:20.583966 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 14:47:20.584008 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem (1082 bytes)
	I0908 14:47:20.584051 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem (1123 bytes)
	I0908 14:47:20.584090 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem (1675 bytes)
	I0908 14:47:20.584150 1160669 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:20.584833 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 14:47:20.619765 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 14:47:20.656093 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 14:47:20.690850 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 14:47:20.725652 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0908 14:47:20.762769 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 14:47:20.798926 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 14:47:20.839948 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 14:47:20.887765 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /usr/share/ca-certificates/11208752.pem (1708 bytes)
	I0908 14:47:20.923331 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 14:47:20.959241 1160669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem --> /usr/share/ca-certificates/1120875.pem (1338 bytes)
	I0908 14:47:20.993186 1160669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 14:47:21.017923 1160669 ssh_runner.go:195] Run: openssl version
	I0908 14:47:21.025582 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 14:47:21.040260 1160669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:21.046850 1160669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:35 /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:21.046933 1160669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:21.056488 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 14:47:21.072758 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120875.pem && ln -fs /usr/share/ca-certificates/1120875.pem /etc/ssl/certs/1120875.pem"
	I0908 14:47:21.089250 1160669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120875.pem
	I0908 14:47:21.097570 1160669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 13:46 /usr/share/ca-certificates/1120875.pem
	I0908 14:47:21.097654 1160669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120875.pem
	I0908 14:47:21.109209 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120875.pem /etc/ssl/certs/51391683.0"
	I0908 14:47:21.124664 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208752.pem && ln -fs /usr/share/ca-certificates/11208752.pem /etc/ssl/certs/11208752.pem"
	I0908 14:47:21.145131 1160669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208752.pem
	I0908 14:47:21.151534 1160669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 13:46 /usr/share/ca-certificates/11208752.pem
	I0908 14:47:21.151623 1160669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208752.pem
	I0908 14:47:21.160393 1160669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11208752.pem /etc/ssl/certs/3ec20f2e.0"
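
The test -L / ln -fs dance above is how OpenSSL's CApath lookup works: every trusted PEM must be reachable under /etc/ssl/certs as <subject-hash>.0, where the hash is what the preceding `openssl x509 -hash -noout` call prints. That is why minikubeCA.pem ends up linked as b5213941.0 and the two injected certs as 51391683.0 and 3ec20f2e.0; for example:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
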
	I0908 14:47:21.176948 1160669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 14:47:21.183123 1160669 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 14:47:21.183194 1160669 kubeadm.go:392] StartCluster: {Name:old-k8s-version-454279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-454279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.48 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:47:21.183299 1160669 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 14:47:21.183369 1160669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 14:47:21.234235 1160669 cri.go:89] found id: ""
	I0908 14:47:21.234343 1160669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 14:47:21.248156 1160669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 14:47:21.264544 1160669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 14:47:21.279114 1160669 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 14:47:21.279139 1160669 kubeadm.go:157] found existing configuration files:
	
	I0908 14:47:21.279216 1160669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 14:47:21.295113 1160669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 14:47:21.295199 1160669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 14:47:21.311854 1160669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 14:47:21.326337 1160669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 14:47:21.326413 1160669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 14:47:21.340653 1160669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 14:47:21.354985 1160669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 14:47:21.355081 1160669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 14:47:21.369972 1160669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 14:47:21.385749 1160669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 14:47:21.385834 1160669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
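The four grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted before kubeadm init runs. The same logic, sketched as a loop with the endpoint and paths from the log:

    # Drop kubeconfigs that do not point at the expected control-plane endpoint.
    ENDPOINT='https://control-plane.minikube.internal:8443'
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/${f}.conf" \
            || sudo rm -f "/etc/kubernetes/${f}.conf"
    done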
	I0908 14:47:21.400297 1160669 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0908 14:47:21.469564 1160669 kubeadm.go:310] [init] Using Kubernetes version: v1.28.0
	I0908 14:47:21.469626 1160669 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 14:47:21.629916 1160669 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 14:47:21.630068 1160669 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 14:47:21.630197 1160669 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 14:47:21.876317 1160669 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
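As the preflight message notes, the image pulls can be done ahead of time. A sketch of that pre-pull, reusing the pinned binary path and generated config from the init command above (minikube does not run this step in this log):

    # Optionally pre-pull the control-plane images for the pinned version.
    sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" \
        kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml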
	I0908 14:47:17.299958 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:17.300515 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:17.300546 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:17.300489 1161595 retry.go:31] will retry after 873.277523ms: waiting for domain to come up
	I0908 14:47:18.175055 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:18.175479 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:18.175543 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:18.175449 1161595 retry.go:31] will retry after 1.230605044s: waiting for domain to come up
	I0908 14:47:19.408231 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:19.408892 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:19.408972 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:19.408874 1161595 retry.go:31] will retry after 1.41166106s: waiting for domain to come up
	I0908 14:47:20.822687 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:20.823353 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:20.823388 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:20.823291 1161595 retry.go:31] will retry after 1.869801403s: waiting for domain to come up
	I0908 14:47:21.994887 1160669 out.go:252]   - Generating certificates and keys ...
	I0908 14:47:21.995014 1160669 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 14:47:21.995109 1160669 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 14:47:22.063280 1160669 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 14:47:22.401923 1160669 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 14:47:22.676005 1160669 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 14:47:22.728731 1160669 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 14:47:23.071243 1160669 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 14:47:23.071579 1160669 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-454279] and IPs [192.168.50.48 127.0.0.1 ::1]
	I0908 14:47:23.563705 1160669 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 14:47:23.563931 1160669 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-454279] and IPs [192.168.50.48 127.0.0.1 ::1]
	I0908 14:47:23.759378 1160669 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 14:47:24.010383 1160669 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 14:47:24.263976 1160669 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 14:47:24.265614 1160669 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 14:47:24.463358 1160669 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 14:47:24.675739 1160669 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 14:47:24.953446 1160669 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 14:47:25.072515 1160669 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 14:47:25.072999 1160669 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 14:47:25.075705 1160669 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 14:47:22.695776 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:22.696339 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:22.696364 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:22.696301 1161595 retry.go:31] will retry after 2.848523465s: waiting for domain to come up
	I0908 14:47:25.546633 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:25.547260 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:25.547300 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:25.547216 1161595 retry.go:31] will retry after 3.223127324s: waiting for domain to come up
	I0908 14:47:25.078393 1160669 out.go:252]   - Booting up control plane ...
	I0908 14:47:25.078534 1160669 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 14:47:25.078627 1160669 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 14:47:25.078715 1160669 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 14:47:25.112469 1160669 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 14:47:25.113621 1160669 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 14:47:25.113770 1160669 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 14:47:25.322425 1160669 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0908 14:47:28.772513 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:28.773199 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:28.773274 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:28.773158 1161595 retry.go:31] will retry after 3.561518321s: waiting for domain to come up
	I0908 14:47:31.822840 1160669 kubeadm.go:310] [apiclient] All control plane components are healthy after 6.503403 seconds
	I0908 14:47:31.822974 1160669 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 14:47:31.843405 1160669 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 14:47:32.384853 1160669 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 14:47:32.385121 1160669 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-454279 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 14:47:32.899251 1160669 kubeadm.go:310] [bootstrap-token] Using token: qk5t9l.4qbiul1i99fdbzyv
	I0908 14:47:32.900654 1160669 out.go:252]   - Configuring RBAC rules ...
	I0908 14:47:32.900828 1160669 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 14:47:32.910356 1160669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 14:47:32.922202 1160669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 14:47:32.925892 1160669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 14:47:32.935405 1160669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 14:47:32.940669 1160669 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 14:47:32.959374 1160669 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 14:47:33.268156 1160669 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 14:47:33.335840 1160669 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 14:47:33.338648 1160669 kubeadm.go:310] 
	I0908 14:47:33.338750 1160669 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 14:47:33.338764 1160669 kubeadm.go:310] 
	I0908 14:47:33.338898 1160669 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 14:47:33.338921 1160669 kubeadm.go:310] 
	I0908 14:47:33.338943 1160669 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 14:47:33.339049 1160669 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 14:47:33.339152 1160669 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 14:47:33.339184 1160669 kubeadm.go:310] 
	I0908 14:47:33.339256 1160669 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 14:47:33.339265 1160669 kubeadm.go:310] 
	I0908 14:47:33.339367 1160669 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 14:47:33.339388 1160669 kubeadm.go:310] 
	I0908 14:47:33.339469 1160669 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 14:47:33.339580 1160669 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 14:47:33.339725 1160669 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 14:47:33.339745 1160669 kubeadm.go:310] 
	I0908 14:47:33.339881 1160669 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 14:47:33.339961 1160669 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 14:47:33.339968 1160669 kubeadm.go:310] 
	I0908 14:47:33.340077 1160669 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qk5t9l.4qbiul1i99fdbzyv \
	I0908 14:47:33.340195 1160669 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b74fdb5b49b8a5f2d0d805722ad58fb11edbe1ed30e10a54ed528060545c93ba \
	I0908 14:47:33.340226 1160669 kubeadm.go:310] 	--control-plane 
	I0908 14:47:33.340235 1160669 kubeadm.go:310] 
	I0908 14:47:33.340367 1160669 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 14:47:33.340382 1160669 kubeadm.go:310] 
	I0908 14:47:33.340461 1160669 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qk5t9l.4qbiul1i99fdbzyv \
	I0908 14:47:33.340556 1160669 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b74fdb5b49b8a5f2d0d805722ad58fb11edbe1ed30e10a54ed528060545c93ba 
	I0908 14:47:33.343383 1160669 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
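If the printed join command is ever lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA. A sketch following the standard kubeadm recipe, assuming the certificate directory named earlier in the log (/var/lib/minikube/certs):

    # Recompute the discovery hash: sha256 over the CA public key in DER form.
    openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
        | openssl pkey -pubin -outform der \
        | openssl dgst -sha256 -hex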
	I0908 14:47:33.343423 1160669 cni.go:84] Creating CNI manager for ""
	I0908 14:47:33.343431 1160669 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:47:33.346086 1160669 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 14:47:33.347584 1160669 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 14:47:32.339013 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:32.339637 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find current IP address of domain no-preload-301894 in network mk-no-preload-301894
	I0908 14:47:32.339691 1161065 main.go:141] libmachine: (no-preload-301894) DBG | I0908 14:47:32.339583 1161595 retry.go:31] will retry after 4.732018081s: waiting for domain to come up
	I0908 14:47:37.073055 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.073619 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has current primary IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.073642 1161065 main.go:141] libmachine: (no-preload-301894) found domain IP: 192.168.39.135
	I0908 14:47:37.073661 1161065 main.go:141] libmachine: (no-preload-301894) reserving static IP address...
	I0908 14:47:37.074002 1161065 main.go:141] libmachine: (no-preload-301894) DBG | unable to find host DHCP lease matching {name: "no-preload-301894", mac: "52:54:00:d6:d3:58", ip: "192.168.39.135"} in network mk-no-preload-301894
	I0908 14:47:33.383475 1160669 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
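The 496-byte file copied above is the bridge CNI config minikube just chose. Its exact contents are not in the log; the sketch below writes a representative bridge conflist (field values are illustrative assumptions, not the verbatim payload):

    # Illustrative bridge CNI config; values are assumptions, not the logged file.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF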
	I0908 14:47:33.454688 1160669 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 14:47:33.454771 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:33.454806 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-454279 minikube.k8s.io/updated_at=2025_09_08T14_47_33_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba minikube.k8s.io/name=old-k8s-version-454279 minikube.k8s.io/primary=true
	I0908 14:47:33.720581 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:33.736508 1160669 ops.go:34] apiserver oom_adj: -16
	I0908 14:47:34.221515 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:34.721604 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:35.221601 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:35.721498 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:36.220816 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:36.720968 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:37.221044 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:37.721319 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:38.220737 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
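The repeated 'kubectl get sa default' runs above are a readiness poll: the default ServiceAccount only appears once the controller-manager's token controller is working, so minikube retries about every 500ms until it shows up. The equivalent loop, with the binary and kubeconfig paths from the log:

    # Poll until the default ServiceAccount exists (controller-manager is ready).
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done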
	I0908 14:47:38.934186 1161261 start.go:364] duration metric: took 43.636529867s to acquireMachinesLock for "pause-120061"
	I0908 14:47:38.934281 1161261 start.go:96] Skipping create...Using existing machine configuration
	I0908 14:47:38.934293 1161261 fix.go:54] fixHost starting: 
	I0908 14:47:38.934795 1161261 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:38.934865 1161261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:38.953899 1161261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0908 14:47:38.954585 1161261 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:38.955180 1161261 main.go:141] libmachine: Using API Version  1
	I0908 14:47:38.955214 1161261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:38.955734 1161261 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:38.955978 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:38.956209 1161261 main.go:141] libmachine: (pause-120061) Calling .GetState
	I0908 14:47:38.958177 1161261 fix.go:112] recreateIfNeeded on pause-120061: state=Running err=<nil>
	W0908 14:47:38.958231 1161261 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 14:47:38.960278 1161261 out.go:252] * Updating the running kvm2 "pause-120061" VM ...
	I0908 14:47:38.960324 1161261 machine.go:93] provisionDockerMachine start ...
	I0908 14:47:38.960364 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:38.960695 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:38.964020 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:38.964583 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:38.964624 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:38.964874 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:38.965165 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:38.965375 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:38.965541 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:38.965701 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:38.966030 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:38.966048 1161261 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 14:47:39.087038 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-120061
	
	I0908 14:47:39.087094 1161261 main.go:141] libmachine: (pause-120061) Calling .GetMachineName
	I0908 14:47:39.087412 1161261 buildroot.go:166] provisioning hostname "pause-120061"
	I0908 14:47:39.087435 1161261 main.go:141] libmachine: (pause-120061) Calling .GetMachineName
	I0908 14:47:39.087596 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.091091 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.091719 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.091743 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.092016 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.092297 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.092524 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.092745 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.092990 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:39.093266 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:39.093281 1161261 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-120061 && echo "pause-120061" | sudo tee /etc/hostname
	I0908 14:47:39.231080 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-120061
	
	I0908 14:47:39.231115 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.234280 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.234692 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.234735 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.234995 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.235241 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.235419 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.235543 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.235743 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:39.235953 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:39.235969 1161261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-120061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-120061/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-120061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:47:39.358526 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:47:39.358561 1161261 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21508-1116714/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-1116714/.minikube}
	I0908 14:47:39.358630 1161261 buildroot.go:174] setting up certificates
	I0908 14:47:39.358646 1161261 provision.go:84] configureAuth start
	I0908 14:47:39.358662 1161261 main.go:141] libmachine: (pause-120061) Calling .GetMachineName
	I0908 14:47:39.359057 1161261 main.go:141] libmachine: (pause-120061) Calling .GetIP
	I0908 14:47:39.362365 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.362831 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.362858 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.362995 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.366014 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.366565 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.366609 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.366788 1161261 provision.go:143] copyHostCerts
	I0908 14:47:39.366878 1161261 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem, removing ...
	I0908 14:47:39.366900 1161261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem
	I0908 14:47:39.366971 1161261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem (1082 bytes)
	I0908 14:47:39.367120 1161261 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem, removing ...
	I0908 14:47:39.367134 1161261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem
	I0908 14:47:39.367165 1161261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem (1123 bytes)
	I0908 14:47:39.367258 1161261 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem, removing ...
	I0908 14:47:39.367269 1161261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem
	I0908 14:47:39.367297 1161261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem (1675 bytes)
	I0908 14:47:39.367390 1161261 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem org=jenkins.pause-120061 san=[127.0.0.1 192.168.61.147 localhost minikube pause-120061]
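The server cert generated above carries exactly the SANs printed in the log line. A hypothetical openssl equivalent of that step (minikube performs it in Go; the ca.pem/ca-key.pem names below stand in for the CaCertPath/CaPrivateKeyPath listed in the auth options):

    # Issue a server cert signed by the machine CA with the logged SAN list.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
        -out server.csr -subj "/O=jenkins.pause-120061"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
        -CAcreateserial -out server.pem -days 365 \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.61.147,DNS:localhost,DNS:minikube,DNS:pause-120061')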
	I0908 14:47:39.573674 1161261 provision.go:177] copyRemoteCerts
	I0908 14:47:39.573751 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:47:39.573781 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.577127 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.577650 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.577687 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.577836 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.578123 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.578302 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.578501 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
	I0908 14:47:39.678101 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 14:47:39.716835 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 14:47:39.765726 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 14:47:39.813075 1161261 provision.go:87] duration metric: took 454.409899ms to configureAuth
	I0908 14:47:39.813115 1161261 buildroot.go:189] setting minikube options for container-runtime
	I0908 14:47:39.813416 1161261 config.go:182] Loaded profile config "pause-120061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:47:39.813522 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:39.816873 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.817323 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:39.817356 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:39.817651 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:39.817919 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.818144 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:39.818328 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:39.818555 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:39.818896 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:39.818913 1161261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 14:47:37.173028 1161065 main.go:141] libmachine: (no-preload-301894) reserved static IP address 192.168.39.135 for domain no-preload-301894
	I0908 14:47:37.173058 1161065 main.go:141] libmachine: (no-preload-301894) waiting for SSH...
	I0908 14:47:37.173117 1161065 main.go:141] libmachine: (no-preload-301894) DBG | Getting to WaitForSSH function...
	I0908 14:47:37.176590 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.177193 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.177248 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.177372 1161065 main.go:141] libmachine: (no-preload-301894) DBG | Using SSH client type: external
	I0908 14:47:37.177396 1161065 main.go:141] libmachine: (no-preload-301894) DBG | Using SSH private key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa (-rw-------)
	I0908 14:47:37.177431 1161065 main.go:141] libmachine: (no-preload-301894) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 14:47:37.177445 1161065 main.go:141] libmachine: (no-preload-301894) DBG | About to run SSH command:
	I0908 14:47:37.177458 1161065 main.go:141] libmachine: (no-preload-301894) DBG | exit 0
	I0908 14:47:37.309120 1161065 main.go:141] libmachine: (no-preload-301894) DBG | SSH cmd err, output: <nil>: 
	I0908 14:47:37.309419 1161065 main.go:141] libmachine: (no-preload-301894) KVM machine creation complete
	I0908 14:47:37.309836 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetConfigRaw
	I0908 14:47:37.310480 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:37.310692 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:37.310909 1161065 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0908 14:47:37.310929 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetState
	I0908 14:47:37.312562 1161065 main.go:141] libmachine: Detecting operating system of created instance...
	I0908 14:47:37.312579 1161065 main.go:141] libmachine: Waiting for SSH to be available...
	I0908 14:47:37.312584 1161065 main.go:141] libmachine: Getting to WaitForSSH function...
	I0908 14:47:37.312589 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:37.315694 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.316135 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.316157 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.316356 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:37.316618 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.316798 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.316974 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:37.317197 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:37.317455 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:37.317468 1161065 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0908 14:47:37.435700 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:47:37.435729 1161065 main.go:141] libmachine: Detecting the provisioner...
	I0908 14:47:37.435738 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:37.438619 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.439018 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.439050 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.439250 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:37.439458 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.439619 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.439750 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:37.439934 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:37.440183 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:37.440196 1161065 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0908 14:47:37.557514 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0908 14:47:37.557585 1161065 main.go:141] libmachine: found compatible host: buildroot
	I0908 14:47:37.557596 1161065 main.go:141] libmachine: Provisioning with buildroot...
	I0908 14:47:37.557608 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetMachineName
	I0908 14:47:37.557921 1161065 buildroot.go:166] provisioning hostname "no-preload-301894"
	I0908 14:47:37.557951 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetMachineName
	I0908 14:47:37.558207 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:37.561160 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.561605 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.561646 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.561784 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:37.561953 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.562111 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.562231 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:37.562386 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:37.562602 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:37.562615 1161065 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-301894 && echo "no-preload-301894" | sudo tee /etc/hostname
	I0908 14:47:37.701317 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-301894
	
	I0908 14:47:37.701351 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:37.705258 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.705910 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.705941 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.706206 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:37.706513 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.706732 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:37.706900 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:37.707110 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:37.707366 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:37.707387 1161065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-301894' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-301894/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-301894' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:47:37.843925 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:47:37.843967 1161065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21508-1116714/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-1116714/.minikube}
	I0908 14:47:37.844021 1161065 buildroot.go:174] setting up certificates
	I0908 14:47:37.844040 1161065 provision.go:84] configureAuth start
	I0908 14:47:37.844058 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetMachineName
	I0908 14:47:37.844432 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetIP
	I0908 14:47:37.847479 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.847900 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.847937 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.848127 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:37.850510 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.850891 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:37.850923 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:37.851078 1161065 provision.go:143] copyHostCerts
	I0908 14:47:37.851158 1161065 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem, removing ...
	I0908 14:47:37.851169 1161065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem
	I0908 14:47:37.851221 1161065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.pem (1082 bytes)
	I0908 14:47:37.851316 1161065 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem, removing ...
	I0908 14:47:37.851324 1161065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem
	I0908 14:47:37.851351 1161065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/cert.pem (1123 bytes)
	I0908 14:47:37.851459 1161065 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem, removing ...
	I0908 14:47:37.851469 1161065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem
	I0908 14:47:37.851487 1161065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-1116714/.minikube/key.pem (1675 bytes)
	I0908 14:47:37.851533 1161065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem org=jenkins.no-preload-301894 san=[127.0.0.1 192.168.39.135 localhost minikube no-preload-301894]
	I0908 14:47:38.160932 1161065 provision.go:177] copyRemoteCerts
	I0908 14:47:38.161016 1161065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:47:38.161048 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.164089 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.164517 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.164551 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.164706 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.164985 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.165168 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.165345 1161065 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa Username:docker}
	I0908 14:47:38.257981 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 14:47:38.295923 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 14:47:38.333158 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 14:47:38.368889 1161065 provision.go:87] duration metric: took 524.827415ms to configureAuth
	I0908 14:47:38.368930 1161065 buildroot.go:189] setting minikube options for container-runtime
	I0908 14:47:38.369177 1161065 config.go:182] Loaded profile config "no-preload-301894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:47:38.369321 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.372614 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.373020 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.373052 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.373273 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.373499 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.373686 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.374213 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.374555 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:38.374842 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:38.374868 1161065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 14:47:38.643745 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
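The SSH command above (whose echoed output is shown here) marks the cluster's service CIDR as an insecure registry for CRI-O and restarts the runtime so the option takes effect. A cleaned-up sketch of the same step, with the path and value copied verbatim from the log:

    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" |
      sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio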
	
	I0908 14:47:38.643790 1161065 main.go:141] libmachine: Checking connection to Docker...
	I0908 14:47:38.643804 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetURL
	I0908 14:47:38.645360 1161065 main.go:141] libmachine: (no-preload-301894) DBG | using libvirt version 6000000
	I0908 14:47:38.648119 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.648477 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.648511 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.648702 1161065 main.go:141] libmachine: Docker is up and running!
	I0908 14:47:38.648720 1161065 main.go:141] libmachine: Reticulating splines...
	I0908 14:47:38.648728 1161065 client.go:171] duration metric: took 25.388584474s to LocalClient.Create
	I0908 14:47:38.648755 1161065 start.go:167] duration metric: took 25.388655219s to libmachine.API.Create "no-preload-301894"
	I0908 14:47:38.648769 1161065 start.go:293] postStartSetup for "no-preload-301894" (driver="kvm2")
	I0908 14:47:38.648783 1161065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:47:38.648812 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:38.649087 1161065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:47:38.649117 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.651965 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.652312 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.652336 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.652604 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.652899 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.653125 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.653274 1161065 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa Username:docker}
	I0908 14:47:38.745265 1161065 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:47:38.751059 1161065 info.go:137] Remote host: Buildroot 2025.02
	I0908 14:47:38.751101 1161065 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/addons for local assets ...
	I0908 14:47:38.751203 1161065 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/files for local assets ...
	I0908 14:47:38.751307 1161065 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem -> 11208752.pem in /etc/ssl/certs
	I0908 14:47:38.751435 1161065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:47:38.765567 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:38.800453 1161065 start.go:296] duration metric: took 151.664041ms for postStartSetup
	I0908 14:47:38.800524 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetConfigRaw
	I0908 14:47:38.801279 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetIP
	I0908 14:47:38.804637 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.804988 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.805020 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.805405 1161065 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/config.json ...
	I0908 14:47:38.805719 1161065 start.go:128] duration metric: took 25.571085913s to createHost
	I0908 14:47:38.805756 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.809193 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.809675 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.809706 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.809911 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.810166 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.810333 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.810546 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.810747 1161065 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:38.810988 1161065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0908 14:47:38.811003 1161065 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 14:47:38.933919 1161065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757342858.916620128
	
	I0908 14:47:38.933949 1161065 fix.go:216] guest clock: 1757342858.916620128
	I0908 14:47:38.933960 1161065 fix.go:229] Guest: 2025-09-08 14:47:38.916620128 +0000 UTC Remote: 2025-09-08 14:47:38.805737661 +0000 UTC m=+56.712336294 (delta=110.882467ms)
	I0908 14:47:38.934034 1161065 fix.go:200] guest clock delta is within tolerance: 110.882467ms
	I0908 14:47:38.934047 1161065 start.go:83] releasing machines lock for "no-preload-301894", held for 25.699591066s
	I0908 14:47:38.934091 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:38.934420 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetIP
	I0908 14:47:38.937673 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.938123 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.938158 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.938357 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:38.938943 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:38.939160 1161065 main.go:141] libmachine: (no-preload-301894) Calling .DriverName
	I0908 14:47:38.939267 1161065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:47:38.939359 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.939400 1161065 ssh_runner.go:195] Run: cat /version.json
	I0908 14:47:38.939433 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHHostname
	I0908 14:47:38.942714 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.942747 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.943190 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.943250 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:38.943274 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.943298 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:38.943680 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.943699 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHPort
	I0908 14:47:38.943921 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.943922 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHKeyPath
	I0908 14:47:38.944143 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.944148 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetSSHUsername
	I0908 14:47:38.944354 1161065 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa Username:docker}
	I0908 14:47:38.944358 1161065 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/no-preload-301894/id_rsa Username:docker}
	I0908 14:47:39.035120 1161065 ssh_runner.go:195] Run: systemctl --version
	I0908 14:47:39.062137 1161065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 14:47:39.236708 1161065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 14:47:39.246763 1161065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 14:47:39.246858 1161065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:47:39.270638 1161065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 14:47:39.270676 1161065 start.go:495] detecting cgroup driver to use...
	I0908 14:47:39.270761 1161065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 14:47:39.297655 1161065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 14:47:39.317784 1161065 docker.go:218] disabling cri-docker service (if available) ...
	I0908 14:47:39.317875 1161065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 14:47:39.335086 1161065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 14:47:39.354042 1161065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 14:47:39.548548 1161065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 14:47:39.712825 1161065 docker.go:234] disabling docker service ...
	I0908 14:47:39.712903 1161065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 14:47:39.734928 1161065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 14:47:39.755360 1161065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 14:47:39.989247 1161065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 14:47:40.143124 1161065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:47:40.161711 1161065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:47:40.188459 1161065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 14:47:40.188551 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.204138 1161065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 14:47:40.204229 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.219098 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.233463 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.248559 1161065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:47:40.264441 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.279123 1161065 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:40.305163 1161065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
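Taken together, the sed edits above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, run conmon in the pod cgroup, and lower the unprivileged-port floor to 0 via a default sysctl. The affected keys, reconstructed from those commands (the rest of the file is untouched and not shown):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]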
	I0908 14:47:40.319616 1161065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:47:40.332770 1161065 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 14:47:40.332859 1161065 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 14:47:40.355858 1161065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
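The failed sysctl probe above is expected on a fresh guest: /proc/sys/net/bridge only exists once the br_netfilter module is loaded, which is why the log notes the failure "might be okay" and falls back to modprobe before enabling IPv4 forwarding. The same check-and-fallback as a standalone sketch:

    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
      sudo modprobe br_netfilter
    fi
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"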
	I0908 14:47:40.369794 1161065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:40.520912 1161065 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 14:47:40.639497 1161065 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 14:47:40.639577 1161065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 14:47:40.645350 1161065 start.go:563] Will wait 60s for crictl version
	I0908 14:47:40.645420 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:40.650328 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:47:40.697177 1161065 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 14:47:40.697287 1161065 ssh_runner.go:195] Run: crio --version
	I0908 14:47:40.730232 1161065 ssh_runner.go:195] Run: crio --version
	I0908 14:47:40.764916 1161065 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 14:47:40.766192 1161065 main.go:141] libmachine: (no-preload-301894) Calling .GetIP
	I0908 14:47:40.769070 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:40.769584 1161065 main.go:141] libmachine: (no-preload-301894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:d3:58", ip: ""} in network mk-no-preload-301894: {Iface:virbr2 ExpiryTime:2025-09-08 15:47:29 +0000 UTC Type:0 Mac:52:54:00:d6:d3:58 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:no-preload-301894 Clientid:01:52:54:00:d6:d3:58}
	I0908 14:47:40.769611 1161065 main.go:141] libmachine: (no-preload-301894) DBG | domain no-preload-301894 has defined IP address 192.168.39.135 and MAC address 52:54:00:d6:d3:58 in network mk-no-preload-301894
	I0908 14:47:40.769912 1161065 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0908 14:47:40.777603 1161065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
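The one-liner above updates /etc/hosts atomically: it filters out any stale host.minikube.internal entry, appends the gateway mapping, writes the result to a temp file, and copies it back with sudo. Unrolled for readability (printf is used here for the literal tab; /tmp/h.$$ expands to a PID-suffixed temp name):

    { grep -v $'\thost.minikube.internal$' /etc/hosts;
      printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts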
	I0908 14:47:40.798773 1161065 kubeadm.go:875] updating cluster {Name:no-preload-301894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-301894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0908 14:47:40.798946 1161065 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 14:47:40.798999 1161065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:40.842242 1161065 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0908 14:47:40.842279 1161065 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.0 registry.k8s.io/kube-controller-manager:v1.34.0 registry.k8s.io/kube-scheduler:v1.34.0 registry.k8s.io/kube-proxy:v1.34.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0908 14:47:40.842343 1161065 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:40.842368 1161065 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:40.842397 1161065 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I0908 14:47:40.842381 1161065 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:40.842427 1161065 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:40.842469 1161065 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:40.842477 1161065 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:40.842407 1161065 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:40.843986 1161065 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:40.843994 1161065 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:40.844049 1161065 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I0908 14:47:40.844112 1161065 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:40.843986 1161065 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:40.844144 1161065 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:40.844195 1161065 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:40.844204 1161065 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:40.976211 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:40.982227 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:40.988896 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:40.989992 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:40.993290 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:41.007920 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I0908 14:47:41.018316 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:41.097150 1161065 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.0" does not exist at hash "90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90" in container runtime
	I0908 14:47:41.097220 1161065 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:41.097294 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.153300 1161065 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.0" does not exist at hash "a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634" in container runtime
	I0908 14:47:41.153361 1161065 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:41.153423 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.200415 1161065 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I0908 14:47:41.200517 1161065 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:41.200547 1161065 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.0" does not exist at hash "46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc" in container runtime
	I0908 14:47:41.200586 1161065 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:41.200600 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.200640 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.200648 1161065 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I0908 14:47:41.200689 1161065 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:41.200735 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.210714 1161065 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I0908 14:47:41.210784 1161065 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I0908 14:47:41.210841 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.215898 1161065 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.0" needs transfer: "registry.k8s.io/kube-proxy:v1.34.0" does not exist at hash "df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce" in container runtime
	I0908 14:47:41.215928 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:41.215962 1161065 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:41.215976 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:41.216015 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:41.216035 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:41.297696 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:41.297695 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:41.297793 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0908 14:47:41.297921 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:41.297946 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:41.298011 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:41.298054 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:41.425096 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0908 14:47:41.460502 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:41.489178 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:41.489245 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:41.489303 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.0
	I0908 14:47:41.509154 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I0908 14:47:41.509183 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.0
	I0908 14:47:41.557564 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0908 14:47:41.587872 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I0908 14:47:41.703578 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0
	I0908 14:47:41.703721 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.0
	I0908 14:47:41.707362 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.0
	I0908 14:47:41.707402 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
	I0908 14:47:41.707450 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0
	I0908 14:47:41.707531 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.0
	I0908 14:47:41.718915 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I0908 14:47:41.719031 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I0908 14:47:41.759895 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I0908 14:47:41.759975 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I0908 14:47:41.760008 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.0': No such file or directory
	I0908 14:47:41.760034 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I0908 14:47:41.760044 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 --> /var/lib/minikube/images/kube-apiserver_v1.34.0 (27077120 bytes)
	I0908 14:47:41.760085 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I0908 14:47:41.843897 1161065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:41.861712 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0
	I0908 14:47:41.861748 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I0908 14:47:41.861787 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I0908 14:47:41.861713 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0
	I0908 14:47:41.861848 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.0
	I0908 14:47:41.861856 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I0908 14:47:41.861719 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.0': No such file or directory
	I0908 14:47:41.861881 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 --> /var/lib/minikube/images/kube-controller-manager_v1.34.0 (22830592 bytes)
	I0908 14:47:41.861875 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I0908 14:47:41.861820 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I0908 14:47:41.861905 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I0908 14:47:41.861932 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.0
	I0908 14:47:41.965136 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.0': No such file or directory
	I0908 14:47:41.965163 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.0': No such file or directory
	I0908 14:47:41.965190 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 --> /var/lib/minikube/images/kube-proxy_v1.34.0 (25966080 bytes)
	I0908 14:47:41.965192 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 --> /var/lib/minikube/images/kube-scheduler_v1.34.0 (17396736 bytes)
	I0908 14:47:41.965722 1161065 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0908 14:47:41.965770 1161065 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:41.965831 1161065 ssh_runner.go:195] Run: which crictl
	I0908 14:47:42.020378 1161065 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I0908 14:47:42.020483 1161065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I0908 14:47:42.078919 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
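Because no preload tarball matches v1.34.0 on crio (crio.go:510 above), every image follows the same cycle visible in this burst of log lines: remove the mismatched tag with crictl rmi, stat the target path on the guest, copy the cached archive over SSH when the stat fails, then podman load it. An illustrative per-image sketch of that cycle (the loop body is not minikube's actual code; paths and the guest address follow the log):

    img=pause_3.10.1
    cache=$HOME/.minikube/cache/images/amd64/registry.k8s.io/$img
    dest=/var/lib/minikube/images/$img
    stat -c "%s %y" "$dest" 2>/dev/null ||
      scp "$cache" "docker@192.168.39.135:$dest"   # over the profile's id_rsa key
    sudo podman load -i "$dest"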
	I0908 14:47:38.720866 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:39.221095 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:39.720987 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:40.221657 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:40.721314 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:41.220766 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:41.721203 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:42.221617 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:42.720952 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:43.221404 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:43.721317 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:44.220963 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:44.720830 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:45.220623 1160669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:47:45.347813 1160669 kubeadm.go:1105] duration metric: took 11.893117029s to wait for elevateKubeSystemPrivileges
	I0908 14:47:45.347887 1160669 kubeadm.go:394] duration metric: took 24.164696368s to StartCluster
	I0908 14:47:45.347916 1160669 settings.go:142] acquiring lock: {Name:mkc208e3a70732deaf67c191918f201f73e82457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:45.348058 1160669 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 14:47:45.349168 1160669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/kubeconfig: {Name:mk93422b0007d912fa8f198f71d62d01a418d566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:45.349548 1160669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 14:47:45.349550 1160669 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.48 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 14:47:45.349640 1160669 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 14:47:45.349795 1160669 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-454279"
	I0908 14:47:45.349805 1160669 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-454279"
	I0908 14:47:45.349820 1160669 config.go:182] Loaded profile config "old-k8s-version-454279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0908 14:47:45.349826 1160669 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-454279"
	I0908 14:47:45.349836 1160669 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-454279"
	I0908 14:47:45.349870 1160669 host.go:66] Checking if "old-k8s-version-454279" exists ...
	I0908 14:47:45.350341 1160669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.350382 1160669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.350391 1160669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.350418 1160669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.351120 1160669 out.go:179] * Verifying Kubernetes components...
	I0908 14:47:45.352793 1160669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:45.374484 1160669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0908 14:47:45.374717 1160669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0908 14:47:45.375337 1160669 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.375461 1160669 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.375918 1160669 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.375942 1160669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.376026 1160669 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.376039 1160669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.376470 1160669 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.376518 1160669 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.376708 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetState
	I0908 14:47:45.377155 1160669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.377198 1160669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.380946 1160669 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-454279"
	I0908 14:47:45.381009 1160669 host.go:66] Checking if "old-k8s-version-454279" exists ...
	I0908 14:47:45.381428 1160669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.381483 1160669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.403210 1160669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37571
	I0908 14:47:45.403809 1160669 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.404531 1160669 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.404563 1160669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.404875 1160669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0908 14:47:45.405094 1160669 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.405298 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetState
	I0908 14:47:45.405577 1160669 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.406133 1160669 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.406151 1160669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.406508 1160669 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.406979 1160669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.407030 1160669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.407322 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:45.410187 1160669 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:45.846353 1161554 start.go:364] duration metric: took 36.603280003s to acquireMachinesLock for "embed-certs-372004"
	I0908 14:47:45.846462 1161554 start.go:93] Provisioning new machine with config: &{Name:embed-certs-372004 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-372004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 14:47:45.846562 1161554 start.go:125] createHost starting for "" (driver="kvm2")
	I0908 14:47:42.579469 1161065 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I0908 14:47:42.579551 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:42.692219 1161065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:47:42.778256 1161065 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.0
	I0908 14:47:42.778368 1161065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.0
	I0908 14:47:42.891505 1161065 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0908 14:47:42.891685 1161065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0908 14:47:45.512519 1161065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.0: (2.734115473s)
	I0908 14:47:45.512565 1161065 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 from cache
	I0908 14:47:45.512592 1161065 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.0
	I0908 14:47:45.512649 1161065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.0
	I0908 14:47:45.512649 1161065 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.620929371s)
	I0908 14:47:45.512697 1161065 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0908 14:47:45.512732 1161065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0908 14:47:45.412567 1160669 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:47:45.412601 1160669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 14:47:45.412635 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:45.417707 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:45.417719 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:45.417757 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:45.417785 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:45.418320 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:45.418906 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:45.419189 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:45.428832 1160669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
	I0908 14:47:45.430114 1160669 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.431127 1160669 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.431156 1160669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.432509 1160669 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.432730 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetState
	I0908 14:47:45.435061 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .DriverName
	I0908 14:47:45.435429 1160669 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 14:47:45.435452 1160669 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 14:47:45.435479 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHHostname
	I0908 14:47:45.440341 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:45.440853 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:56:ae", ip: ""} in network mk-old-k8s-version-454279: {Iface:virbr3 ExpiryTime:2025-09-08 15:47:00 +0000 UTC Type:0 Mac:52:54:00:78:56:ae Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:old-k8s-version-454279 Clientid:01:52:54:00:78:56:ae}
	I0908 14:47:45.440895 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | domain old-k8s-version-454279 has defined IP address 192.168.50.48 and MAC address 52:54:00:78:56:ae in network mk-old-k8s-version-454279
	I0908 14:47:45.441132 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHPort
	I0908 14:47:45.441409 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHKeyPath
	I0908 14:47:45.441584 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .GetSSHUsername
	I0908 14:47:45.441763 1160669 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/old-k8s-version-454279/id_rsa Username:docker}
	I0908 14:47:45.742923 1160669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 14:47:45.789308 1160669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:47:46.056326 1160669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:47:46.132154 1160669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 14:47:48.417231 1160669 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.674257678s)
	I0908 14:47:48.417274 1160669 start.go:976] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
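The pipeline that just completed rewrites the coredns ConfigMap: it fetches the Corefile, splices a hosts block ahead of the forward directive (and a log directive after errors), then kubectl-replaces the ConfigMap. The fragment injected into the Corefile, reconstructed from the sed expressions above:

    hosts {
       192.168.50.1 host.minikube.internal
       fallthrough
    }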
	I0908 14:47:48.418751 1160669 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.629391053s)
	I0908 14:47:48.419470 1160669 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-454279" to be "Ready" ...
	I0908 14:47:48.441291 1160669 node_ready.go:49] node "old-k8s-version-454279" is "Ready"
	I0908 14:47:48.441355 1160669 node_ready.go:38] duration metric: took 21.855187ms for node "old-k8s-version-454279" to be "Ready" ...
	I0908 14:47:48.441379 1160669 api_server.go:52] waiting for apiserver process to appear ...
	I0908 14:47:48.441493 1160669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:47:48.609162 1160669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.552776998s)
	I0908 14:47:48.609230 1160669 main.go:141] libmachine: Making call to close driver server
	I0908 14:47:48.609244 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .Close
	I0908 14:47:48.609272 1160669 api_server.go:72] duration metric: took 3.25968722s to wait for apiserver process to appear ...
	I0908 14:47:48.609284 1160669 api_server.go:88] waiting for apiserver healthz status ...
	I0908 14:47:48.609321 1160669 api_server.go:253] Checking apiserver healthz at https://192.168.50.48:8443/healthz ...
	I0908 14:47:48.609632 1160669 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:47:48.609659 1160669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:47:48.609672 1160669 main.go:141] libmachine: Making call to close driver server
	I0908 14:47:48.609699 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .Close
	I0908 14:47:48.609795 1160669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.477033697s)
	I0908 14:47:48.610022 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Closing plugin on server side
	I0908 14:47:48.610109 1160669 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:47:48.610141 1160669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:47:48.610338 1160669 main.go:141] libmachine: Making call to close driver server
	I0908 14:47:48.610402 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .Close
	I0908 14:47:48.610689 1160669 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:47:48.610709 1160669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:47:48.610718 1160669 main.go:141] libmachine: Making call to close driver server
	I0908 14:47:48.610725 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .Close
	I0908 14:47:48.611820 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Closing plugin on server side
	I0908 14:47:48.611828 1160669 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:47:48.611841 1160669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:47:48.623333 1160669 api_server.go:279] https://192.168.50.48:8443/healthz returned 200:
	ok
	I0908 14:47:48.625613 1160669 api_server.go:141] control plane version: v1.28.0
	I0908 14:47:48.625675 1160669 api_server.go:131] duration metric: took 16.381913ms to wait for apiserver health ...
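
The healthz gate above is a plain HTTPS poll: hit `/healthz` on the apiserver until it answers `200 ok` or the wait times out. A minimal Go sketch of that loop (an illustration, not minikube's actual code; real callers should trust the cluster CA rather than skipping TLS verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns HTTP 200 or timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: the apiserver cert is self-signed here, so we skip
            // verification; production code should pin the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // the "returned 200: ok" case in the log
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.50.48:8443/healthz", time.Minute))
    }
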
	I0908 14:47:48.625689 1160669 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 14:47:48.651627 1160669 system_pods.go:59] 8 kube-system pods found
	I0908 14:47:48.651722 1160669 system_pods.go:61] "coredns-5dd5756b68-bzzvj" [690695ec-8039-4269-894c-bb8ef49aef3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.651748 1160669 system_pods.go:61] "coredns-5dd5756b68-wnv5p" [d97c50cc-9633-4230-b501-5cb90fc1fed6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.651756 1160669 system_pods.go:61] "etcd-old-k8s-version-454279" [ea25c27d-e993-4978-89bf-8699bd763b8e] Running
	I0908 14:47:48.651763 1160669 system_pods.go:61] "kube-apiserver-old-k8s-version-454279" [93e45f85-1ddb-4873-893b-a0008c4e9e47] Running
	I0908 14:47:48.651771 1160669 system_pods.go:61] "kube-controller-manager-old-k8s-version-454279" [795f0269-31ee-492d-93d4-d58e6378b2a0] Running
	I0908 14:47:48.651779 1160669 system_pods.go:61] "kube-proxy-rjdpq" [4aa93314-791f-4a28-8457-c8c7348a2167] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 14:47:48.651785 1160669 system_pods.go:61] "kube-scheduler-old-k8s-version-454279" [451a54a6-51f0-42c8-bde1-99e63b386b9e] Running
	I0908 14:47:48.651790 1160669 system_pods.go:61] "storage-provisioner" [1d11738d-c363-45ab-b2fb-7973140a1b2e] Pending
	I0908 14:47:48.651800 1160669 system_pods.go:74] duration metric: took 26.101765ms to wait for pod list to return data ...
	I0908 14:47:48.651813 1160669 default_sa.go:34] waiting for default service account to be created ...
	I0908 14:47:48.655569 1160669 main.go:141] libmachine: Making call to close driver server
	I0908 14:47:48.655601 1160669 main.go:141] libmachine: (old-k8s-version-454279) Calling .Close
	I0908 14:47:48.656109 1160669 main.go:141] libmachine: (old-k8s-version-454279) DBG | Closing plugin on server side
	I0908 14:47:48.656177 1160669 main.go:141] libmachine: Successfully made call to close driver server
	I0908 14:47:48.656189 1160669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 14:47:48.657580 1160669 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 14:47:45.848020 1161554 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 14:47:45.848269 1161554 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:47:45.848341 1161554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:47:45.871830 1161554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33489
	I0908 14:47:45.872436 1161554 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:47:45.873082 1161554 main.go:141] libmachine: Using API Version  1
	I0908 14:47:45.873108 1161554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:47:45.873586 1161554 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:47:45.873785 1161554 main.go:141] libmachine: (embed-certs-372004) Calling .GetMachineName
	I0908 14:47:45.873955 1161554 main.go:141] libmachine: (embed-certs-372004) Calling .DriverName
	I0908 14:47:45.874172 1161554 start.go:159] libmachine.API.Create for "embed-certs-372004" (driver="kvm2")
	I0908 14:47:45.874207 1161554 client.go:168] LocalClient.Create starting
	I0908 14:47:45.874250 1161554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem
	I0908 14:47:45.874290 1161554 main.go:141] libmachine: Decoding PEM data...
	I0908 14:47:45.874318 1161554 main.go:141] libmachine: Parsing certificate...
	I0908 14:47:45.874393 1161554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem
	I0908 14:47:45.874431 1161554 main.go:141] libmachine: Decoding PEM data...
	I0908 14:47:45.874447 1161554 main.go:141] libmachine: Parsing certificate...
	I0908 14:47:45.874477 1161554 main.go:141] libmachine: Running pre-create checks...
	I0908 14:47:45.874487 1161554 main.go:141] libmachine: (embed-certs-372004) Calling .PreCreateCheck
	I0908 14:47:45.874937 1161554 main.go:141] libmachine: (embed-certs-372004) Calling .GetConfigRaw
	I0908 14:47:45.875461 1161554 main.go:141] libmachine: Creating machine...
	I0908 14:47:45.875478 1161554 main.go:141] libmachine: (embed-certs-372004) Calling .Create
	I0908 14:47:45.875635 1161554 main.go:141] libmachine: (embed-certs-372004) creating KVM machine...
	I0908 14:47:45.875682 1161554 main.go:141] libmachine: (embed-certs-372004) creating network...
	I0908 14:47:45.877282 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | found existing default KVM network
	I0908 14:47:45.878669 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:45.878495 1161911 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c1:a4:09} reservation:<nil>}
	I0908 14:47:45.881284 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:45.879355 1161911 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:88:97:37} reservation:<nil>}
	I0908 14:47:45.881324 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:45.880084 1161911 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:8f:0a:78} reservation:<nil>}
	I0908 14:47:45.881348 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:45.881085 1161911 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ceac0}
	I0908 14:47:45.881368 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | created network xml: 
	I0908 14:47:45.881375 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | <network>
	I0908 14:47:45.881380 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   <name>mk-embed-certs-372004</name>
	I0908 14:47:45.881385 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   <dns enable='no'/>
	I0908 14:47:45.881389 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   
	I0908 14:47:45.881396 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0908 14:47:45.881400 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |     <dhcp>
	I0908 14:47:45.881406 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0908 14:47:45.881410 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |     </dhcp>
	I0908 14:47:45.881414 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   </ip>
	I0908 14:47:45.881418 1161554 main.go:141] libmachine: (embed-certs-372004) DBG |   
	I0908 14:47:45.881422 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | </network>
	I0908 14:47:45.881426 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | 
	I0908 14:47:45.890786 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | trying to create private KVM network mk-embed-certs-372004 192.168.72.0/24...
	I0908 14:47:46.003232 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | private KVM network mk-embed-certs-372004 192.168.72.0/24 created
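
The network.go lines above show the subnet scan that ran before this network was defined: each candidate private /24 whose gateway address is already bound to an existing bridge (virbr1–virbr3) is skipped, and the first free one, 192.168.72.0/24, is used. A minimal Go sketch of that selection (the candidate list and the .1-gateway convention are taken from the log; minikube's real scan covers more ranges):

    package main

    import (
        "fmt"
        "net"
    )

    // taken reports whether gw is already bound to a local interface
    // (e.g. virbr2 owning 192.168.39.1 in the log above).
    func taken(gw string) bool {
        addrs, _ := net.InterfaceAddrs()
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gw {
                return true
            }
        }
        return false
    }

    func main() {
        candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
        for _, cidr := range candidates {
            ip, _, err := net.ParseCIDR(cidr)
            if err != nil {
                continue
            }
            gw := ip.To4()
            gw[3] = 1 // the gateway is the .1 address by convention here
            if taken(gw.String()) {
                fmt.Println("skipping subnet", cidr, "that is taken")
                continue
            }
            fmt.Println("using free private subnet", cidr)
            return
        }
    }
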
	I0908 14:47:46.003502 1161554 main.go:141] libmachine: (embed-certs-372004) setting up store path in /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004 ...
	I0908 14:47:46.003538 1161554 main.go:141] libmachine: (embed-certs-372004) building disk image from file:///home/jenkins/minikube-integration/21508-1116714/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 14:47:46.003561 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:46.003482 1161911 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:47:46.003723 1161554 main.go:141] libmachine: (embed-certs-372004) Downloading /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21508-1116714/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 14:47:46.335755 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:46.335566 1161911 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004/id_rsa...
	I0908 14:47:46.601582 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:46.601395 1161911 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004/embed-certs-372004.rawdisk...
	I0908 14:47:46.601613 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | Writing magic tar header
	I0908 14:47:46.601631 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | Writing SSH key tar header
	I0908 14:47:46.601654 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:46.601587 1161911 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004 ...
	I0908 14:47:46.601773 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004
	I0908 14:47:46.601935 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004 (perms=drwx------)
	I0908 14:47:46.602028 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube/machines
	I0908 14:47:46.602055 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:47:46.602069 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21508-1116714
	I0908 14:47:46.602079 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0908 14:47:46.602093 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home/jenkins
	I0908 14:47:46.602101 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | checking permissions on dir: /home
	I0908 14:47:46.602113 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | skipping /home - not owner
	I0908 14:47:46.602130 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube/machines (perms=drwxr-xr-x)
	I0908 14:47:46.602140 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714/.minikube (perms=drwxr-xr-x)
	I0908 14:47:46.602152 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins/minikube-integration/21508-1116714 (perms=drwxrwxr-x)
	I0908 14:47:46.602161 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0908 14:47:46.602172 1161554 main.go:141] libmachine: (embed-certs-372004) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0908 14:47:46.602180 1161554 main.go:141] libmachine: (embed-certs-372004) creating domain...
	I0908 14:47:46.603813 1161554 main.go:141] libmachine: (embed-certs-372004) define libvirt domain using xml: 
	I0908 14:47:46.603835 1161554 main.go:141] libmachine: (embed-certs-372004) <domain type='kvm'>
	I0908 14:47:46.603843 1161554 main.go:141] libmachine: (embed-certs-372004)   <name>embed-certs-372004</name>
	I0908 14:47:46.603849 1161554 main.go:141] libmachine: (embed-certs-372004)   <memory unit='MiB'>3072</memory>
	I0908 14:47:46.603868 1161554 main.go:141] libmachine: (embed-certs-372004)   <vcpu>2</vcpu>
	I0908 14:47:46.603878 1161554 main.go:141] libmachine: (embed-certs-372004)   <features>
	I0908 14:47:46.603887 1161554 main.go:141] libmachine: (embed-certs-372004)     <acpi/>
	I0908 14:47:46.603893 1161554 main.go:141] libmachine: (embed-certs-372004)     <apic/>
	I0908 14:47:46.603900 1161554 main.go:141] libmachine: (embed-certs-372004)     <pae/>
	I0908 14:47:46.603906 1161554 main.go:141] libmachine: (embed-certs-372004)     
	I0908 14:47:46.603912 1161554 main.go:141] libmachine: (embed-certs-372004)   </features>
	I0908 14:47:46.603919 1161554 main.go:141] libmachine: (embed-certs-372004)   <cpu mode='host-passthrough'>
	I0908 14:47:46.603926 1161554 main.go:141] libmachine: (embed-certs-372004)   
	I0908 14:47:46.603932 1161554 main.go:141] libmachine: (embed-certs-372004)   </cpu>
	I0908 14:47:46.603941 1161554 main.go:141] libmachine: (embed-certs-372004)   <os>
	I0908 14:47:46.603947 1161554 main.go:141] libmachine: (embed-certs-372004)     <type>hvm</type>
	I0908 14:47:46.603955 1161554 main.go:141] libmachine: (embed-certs-372004)     <boot dev='cdrom'/>
	I0908 14:47:46.603963 1161554 main.go:141] libmachine: (embed-certs-372004)     <boot dev='hd'/>
	I0908 14:47:46.603972 1161554 main.go:141] libmachine: (embed-certs-372004)     <bootmenu enable='no'/>
	I0908 14:47:46.603978 1161554 main.go:141] libmachine: (embed-certs-372004)   </os>
	I0908 14:47:46.603987 1161554 main.go:141] libmachine: (embed-certs-372004)   <devices>
	I0908 14:47:46.603995 1161554 main.go:141] libmachine: (embed-certs-372004)     <disk type='file' device='cdrom'>
	I0908 14:47:46.604013 1161554 main.go:141] libmachine: (embed-certs-372004)       <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004/boot2docker.iso'/>
	I0908 14:47:46.604022 1161554 main.go:141] libmachine: (embed-certs-372004)       <target dev='hdc' bus='scsi'/>
	I0908 14:47:46.604029 1161554 main.go:141] libmachine: (embed-certs-372004)       <readonly/>
	I0908 14:47:46.604034 1161554 main.go:141] libmachine: (embed-certs-372004)     </disk>
	I0908 14:47:46.604042 1161554 main.go:141] libmachine: (embed-certs-372004)     <disk type='file' device='disk'>
	I0908 14:47:46.604050 1161554 main.go:141] libmachine: (embed-certs-372004)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0908 14:47:46.604065 1161554 main.go:141] libmachine: (embed-certs-372004)       <source file='/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/embed-certs-372004/embed-certs-372004.rawdisk'/>
	I0908 14:47:46.604073 1161554 main.go:141] libmachine: (embed-certs-372004)       <target dev='hda' bus='virtio'/>
	I0908 14:47:46.604082 1161554 main.go:141] libmachine: (embed-certs-372004)     </disk>
	I0908 14:47:46.604116 1161554 main.go:141] libmachine: (embed-certs-372004)     <interface type='network'>
	I0908 14:47:46.604143 1161554 main.go:141] libmachine: (embed-certs-372004)       <source network='mk-embed-certs-372004'/>
	I0908 14:47:46.604151 1161554 main.go:141] libmachine: (embed-certs-372004)       <model type='virtio'/>
	I0908 14:47:46.604159 1161554 main.go:141] libmachine: (embed-certs-372004)     </interface>
	I0908 14:47:46.604166 1161554 main.go:141] libmachine: (embed-certs-372004)     <interface type='network'>
	I0908 14:47:46.604176 1161554 main.go:141] libmachine: (embed-certs-372004)       <source network='default'/>
	I0908 14:47:46.604183 1161554 main.go:141] libmachine: (embed-certs-372004)       <model type='virtio'/>
	I0908 14:47:46.604191 1161554 main.go:141] libmachine: (embed-certs-372004)     </interface>
	I0908 14:47:46.604202 1161554 main.go:141] libmachine: (embed-certs-372004)     <serial type='pty'>
	I0908 14:47:46.604211 1161554 main.go:141] libmachine: (embed-certs-372004)       <target port='0'/>
	I0908 14:47:46.604218 1161554 main.go:141] libmachine: (embed-certs-372004)     </serial>
	I0908 14:47:46.604227 1161554 main.go:141] libmachine: (embed-certs-372004)     <console type='pty'>
	I0908 14:47:46.604234 1161554 main.go:141] libmachine: (embed-certs-372004)       <target type='serial' port='0'/>
	I0908 14:47:46.604243 1161554 main.go:141] libmachine: (embed-certs-372004)     </console>
	I0908 14:47:46.604251 1161554 main.go:141] libmachine: (embed-certs-372004)     <rng model='virtio'>
	I0908 14:47:46.604260 1161554 main.go:141] libmachine: (embed-certs-372004)       <backend model='random'>/dev/random</backend>
	I0908 14:47:46.604266 1161554 main.go:141] libmachine: (embed-certs-372004)     </rng>
	I0908 14:47:46.604273 1161554 main.go:141] libmachine: (embed-certs-372004)     
	I0908 14:47:46.604279 1161554 main.go:141] libmachine: (embed-certs-372004)     
	I0908 14:47:46.604286 1161554 main.go:141] libmachine: (embed-certs-372004)   </devices>
	I0908 14:47:46.604293 1161554 main.go:141] libmachine: (embed-certs-372004) </domain>
	I0908 14:47:46.604305 1161554 main.go:141] libmachine: (embed-certs-372004) 
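
The domain XML above is produced by filling a definition template with the machine's name, memory, vCPU count, and disk/ISO paths, then handing the result to libvirt to define. A heavily truncated Go sketch of that kind of rendering (the type and template names here are illustrative, not minikube's actual code; devices, networks, and OS sections are omitted):

    package main

    import (
        "os"
        "text/template"
    )

    // domainTmpl is a stub of the full definition printed in the log.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
    </domain>
    `

    type domain struct {
        Name      string
        MemoryMiB int
        CPUs      int
    }

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        t.Execute(os.Stdout, domain{Name: "embed-certs-372004", MemoryMiB: 3072, CPUs: 2})
    }
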
	I0908 14:47:46.614959 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:01:62:d7 in network default
	I0908 14:47:46.615798 1161554 main.go:141] libmachine: (embed-certs-372004) starting domain...
	I0908 14:47:46.615819 1161554 main.go:141] libmachine: (embed-certs-372004) ensuring networks are active...
	I0908 14:47:46.615839 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:46.616924 1161554 main.go:141] libmachine: (embed-certs-372004) Ensuring network default is active
	I0908 14:47:46.617295 1161554 main.go:141] libmachine: (embed-certs-372004) Ensuring network mk-embed-certs-372004 is active
	I0908 14:47:46.618335 1161554 main.go:141] libmachine: (embed-certs-372004) getting domain XML...
	I0908 14:47:46.619436 1161554 main.go:141] libmachine: (embed-certs-372004) creating domain...
	I0908 14:47:47.157066 1161554 main.go:141] libmachine: (embed-certs-372004) waiting for IP...
	I0908 14:47:47.157977 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:47.158511 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:47.158639 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:47.158597 1161911 retry.go:31] will retry after 258.261603ms: waiting for domain to come up
	I0908 14:47:47.418495 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:47.419294 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:47.419330 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:47.419241 1161911 retry.go:31] will retry after 241.609497ms: waiting for domain to come up
	I0908 14:47:47.662948 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:47.663597 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:47.663634 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:47.663559 1161911 retry.go:31] will retry after 304.667685ms: waiting for domain to come up
	I0908 14:47:47.970449 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:47.971048 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:47.971108 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:47.971031 1161911 retry.go:31] will retry after 480.152266ms: waiting for domain to come up
	I0908 14:47:48.453029 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:48.453819 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:48.454035 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:48.453910 1161911 retry.go:31] will retry after 680.820573ms: waiting for domain to come up
	I0908 14:47:49.137093 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:49.137654 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:49.137684 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:49.137630 1161911 retry.go:31] will retry after 741.962797ms: waiting for domain to come up
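
The retry.go lines above poll the network's DHCP leases for the new domain with a jittered, growing backoff (258ms climbing past 1s) until an address appears. A minimal Go sketch of that wait loop (the lease lookup is stubbed and the exact backoff curve is an assumption):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup with jittered, growing backoff until it
    // yields an address or the deadline passes.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        base := 250 * time.Millisecond
        for attempt := 1; time.Now().Before(deadline); attempt++ {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            // Grows with the attempt count, roughly matching the log's
            // 258ms -> 1.07s progression (the exact curve is an assumption).
            sleep := base + time.Duration(rand.Int63n(int64(base)*int64(attempt)))
            fmt.Printf("will retry after %s: waiting for domain to come up\n", sleep)
            time.Sleep(sleep)
        }
        return "", errors.New("domain did not come up in time")
    }

    func main() {
        tries := 0
        ip, err := waitForIP(func() (string, error) {
            if tries++; tries < 4 {
                return "", errors.New("no DHCP lease yet")
            }
            return "192.168.72.2", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
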
	I0908 14:47:45.543761 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 14:47:45.543805 1161261 machine.go:96] duration metric: took 6.583470839s to provisionDockerMachine
	I0908 14:47:45.543824 1161261 start.go:293] postStartSetup for "pause-120061" (driver="kvm2")
	I0908 14:47:45.543839 1161261 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:47:45.543865 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.544268 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:47:45.544299 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.548239 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.548620 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.548665 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.548918 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.549128 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.549315 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.549481 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
	I0908 14:47:45.651211 1161261 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:47:45.658742 1161261 info.go:137] Remote host: Buildroot 2025.02
	I0908 14:47:45.658788 1161261 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/addons for local assets ...
	I0908 14:47:45.658868 1161261 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1116714/.minikube/files for local assets ...
	I0908 14:47:45.658969 1161261 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem -> 11208752.pem in /etc/ssl/certs
	I0908 14:47:45.659097 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:47:45.676039 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:45.724138 1161261 start.go:296] duration metric: took 180.282144ms for postStartSetup
	I0908 14:47:45.724193 1161261 fix.go:56] duration metric: took 6.789899375s for fixHost
	I0908 14:47:45.724223 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.727807 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.728227 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.728256 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.728609 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.728821 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.728957 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.729071 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.729234 1161261 main.go:141] libmachine: Using SSH client type: native
	I0908 14:47:45.729638 1161261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0908 14:47:45.729654 1161261 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 14:47:45.846172 1161261 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757342865.843199249
	
	I0908 14:47:45.846208 1161261 fix.go:216] guest clock: 1757342865.843199249
	I0908 14:47:45.846220 1161261 fix.go:229] Guest: 2025-09-08 14:47:45.843199249 +0000 UTC Remote: 2025-09-08 14:47:45.724198252 +0000 UTC m=+50.631490013 (delta=119.000997ms)
	I0908 14:47:45.846246 1161261 fix.go:200] guest clock delta is within tolerance: 119.000997ms
	I0908 14:47:45.846254 1161261 start.go:83] releasing machines lock for "pause-120061", held for 6.912017635s
	I0908 14:47:45.846294 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.846620 1161261 main.go:141] libmachine: (pause-120061) Calling .GetIP
	I0908 14:47:45.849936 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.850359 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.850429 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.850680 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.851390 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.851623 1161261 main.go:141] libmachine: (pause-120061) Calling .DriverName
	I0908 14:47:45.851760 1161261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:47:45.851826 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.851903 1161261 ssh_runner.go:195] Run: cat /version.json
	I0908 14:47:45.851933 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHHostname
	I0908 14:47:45.855883 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.856051 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.856613 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.856683 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.856713 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:45.856755 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:45.857042 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.857146 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHPort
	I0908 14:47:45.857256 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.857456 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHKeyPath
	I0908 14:47:45.857469 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.857681 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
	I0908 14:47:45.858044 1161261 main.go:141] libmachine: (pause-120061) Calling .GetSSHUsername
	I0908 14:47:45.858209 1161261 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/pause-120061/id_rsa Username:docker}
	I0908 14:47:45.984024 1161261 ssh_runner.go:195] Run: systemctl --version
	I0908 14:47:45.994417 1161261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 14:47:46.189541 1161261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 14:47:46.205243 1161261 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 14:47:46.205348 1161261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:47:46.225389 1161261 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 14:47:46.225428 1161261 start.go:495] detecting cgroup driver to use...
	I0908 14:47:46.225519 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 14:47:46.259747 1161261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 14:47:46.288963 1161261 docker.go:218] disabling cri-docker service (if available) ...
	I0908 14:47:46.289158 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 14:47:46.320181 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 14:47:46.347824 1161261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 14:47:46.556387 1161261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 14:47:46.797576 1161261 docker.go:234] disabling docker service ...
	I0908 14:47:46.797675 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 14:47:46.847535 1161261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 14:47:46.878193 1161261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 14:47:47.161555 1161261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 14:47:47.442372 1161261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:47:47.462302 1161261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:47:47.492084 1161261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 14:47:47.492176 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.508165 1161261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 14:47:47.508295 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.528597 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.546925 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.563039 1161261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:47:47.583391 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.598701 1161261 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.619434 1161261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 14:47:47.641052 1161261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:47:47.654092 1161261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 14:47:47.668357 1161261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:47.985180 1161261 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 14:47:51.484903 1161261 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.499673595s)
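
Taken together, the sed edits above pin the pause image, switch cri-o to the cgroupfs driver, force conmon into the pod cgroup, and open unprivileged low ports, after which crio is restarted to pick the changes up. Reconstructed from those commands, the resulting keys in /etc/crio/crio.conf.d/02-crio.conf look like this (other keys, and their order within the file, omitted):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
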
	I0908 14:47:51.484943 1161261 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 14:47:51.485020 1161261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 14:47:51.491847 1161261 start.go:563] Will wait 60s for crictl version
	I0908 14:47:51.491926 1161261 ssh_runner.go:195] Run: which crictl
	I0908 14:47:51.497807 1161261 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:47:51.555525 1161261 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 14:47:51.555677 1161261 ssh_runner.go:195] Run: crio --version
	I0908 14:47:51.590312 1161261 ssh_runner.go:195] Run: crio --version
	I0908 14:47:51.637110 1161261 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 14:47:48.523994 1161065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.0: (3.01130862s)
	I0908 14:47:48.524041 1161065 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 from cache
	I0908 14:47:48.524073 1161065 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I0908 14:47:48.524132 1161065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I0908 14:47:50.824020 1161065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.299841923s)
	I0908 14:47:50.824066 1161065 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I0908 14:47:50.824102 1161065 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.0
	I0908 14:47:50.824159 1161065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.0
	I0908 14:47:48.658564 1160669 addons.go:514] duration metric: took 3.308950977s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0908 14:47:48.660760 1160669 default_sa.go:45] found service account: "default"
	I0908 14:47:48.660792 1160669 default_sa.go:55] duration metric: took 8.963262ms for default service account to be created ...
	I0908 14:47:48.660806 1160669 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 14:47:48.670524 1160669 system_pods.go:86] 8 kube-system pods found
	I0908 14:47:48.670572 1160669 system_pods.go:89] "coredns-5dd5756b68-bzzvj" [690695ec-8039-4269-894c-bb8ef49aef3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.670582 1160669 system_pods.go:89] "coredns-5dd5756b68-wnv5p" [d97c50cc-9633-4230-b501-5cb90fc1fed6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.670590 1160669 system_pods.go:89] "etcd-old-k8s-version-454279" [ea25c27d-e993-4978-89bf-8699bd763b8e] Running
	I0908 14:47:48.670599 1160669 system_pods.go:89] "kube-apiserver-old-k8s-version-454279" [93e45f85-1ddb-4873-893b-a0008c4e9e47] Running
	I0908 14:47:48.670606 1160669 system_pods.go:89] "kube-controller-manager-old-k8s-version-454279" [795f0269-31ee-492d-93d4-d58e6378b2a0] Running
	I0908 14:47:48.670614 1160669 system_pods.go:89] "kube-proxy-rjdpq" [4aa93314-791f-4a28-8457-c8c7348a2167] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 14:47:48.670624 1160669 system_pods.go:89] "kube-scheduler-old-k8s-version-454279" [451a54a6-51f0-42c8-bde1-99e63b386b9e] Running
	I0908 14:47:48.670632 1160669 system_pods.go:89] "storage-provisioner" [1d11738d-c363-45ab-b2fb-7973140a1b2e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:47:48.670671 1160669 retry.go:31] will retry after 205.617344ms: missing components: kube-dns, kube-proxy
	I0908 14:47:48.881680 1160669 system_pods.go:86] 8 kube-system pods found
	I0908 14:47:48.881720 1160669 system_pods.go:89] "coredns-5dd5756b68-bzzvj" [690695ec-8039-4269-894c-bb8ef49aef3f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.881733 1160669 system_pods.go:89] "coredns-5dd5756b68-wnv5p" [d97c50cc-9633-4230-b501-5cb90fc1fed6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:47:48.881739 1160669 system_pods.go:89] "etcd-old-k8s-version-454279" [ea25c27d-e993-4978-89bf-8699bd763b8e] Running
	I0908 14:47:48.881744 1160669 system_pods.go:89] "kube-apiserver-old-k8s-version-454279" [93e45f85-1ddb-4873-893b-a0008c4e9e47] Running
	I0908 14:47:48.881750 1160669 system_pods.go:89] "kube-controller-manager-old-k8s-version-454279" [795f0269-31ee-492d-93d4-d58e6378b2a0] Running
	I0908 14:47:48.881755 1160669 system_pods.go:89] "kube-proxy-rjdpq" [4aa93314-791f-4a28-8457-c8c7348a2167] Running
	I0908 14:47:48.881760 1160669 system_pods.go:89] "kube-scheduler-old-k8s-version-454279" [451a54a6-51f0-42c8-bde1-99e63b386b9e] Running
	I0908 14:47:48.881767 1160669 system_pods.go:89] "storage-provisioner" [1d11738d-c363-45ab-b2fb-7973140a1b2e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:47:48.881779 1160669 system_pods.go:126] duration metric: took 220.96307ms to wait for k8s-apps to be running ...
	I0908 14:47:48.881795 1160669 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 14:47:48.881855 1160669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:47:48.901704 1160669 system_svc.go:56] duration metric: took 19.896589ms WaitForService to wait for kubelet
	I0908 14:47:48.901746 1160669 kubeadm.go:578] duration metric: took 3.552161714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 14:47:48.901771 1160669 node_conditions.go:102] verifying NodePressure condition ...
	I0908 14:47:48.907134 1160669 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 14:47:48.907167 1160669 node_conditions.go:123] node cpu capacity is 2
	I0908 14:47:48.907182 1160669 node_conditions.go:105] duration metric: took 5.402366ms to run NodePressure ...
	I0908 14:47:48.907199 1160669 start.go:241] waiting for startup goroutines ...
	I0908 14:47:48.925974 1160669 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-454279" context rescaled to 1 replicas
	I0908 14:47:48.926019 1160669 start.go:246] waiting for cluster config update ...
	I0908 14:47:48.926056 1160669 start.go:255] writing updated cluster config ...
	I0908 14:47:48.926406 1160669 ssh_runner.go:195] Run: rm -f paused
	I0908 14:47:48.935151 1160669 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 14:47:48.946541 1160669 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bzzvj" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 14:47:50.955115 1160669 pod_ready.go:104] pod "coredns-5dd5756b68-bzzvj" is not "Ready", error: <nil>
	W0908 14:47:52.955892 1160669 pod_ready.go:104] pod "coredns-5dd5756b68-bzzvj" is not "Ready", error: <nil>
	I0908 14:47:49.881971 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:49.882496 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:49.882571 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:49.882485 1161911 retry.go:31] will retry after 1.068110411s: waiting for domain to come up
	I0908 14:47:50.952070 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:50.952673 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:50.952699 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:50.952645 1161911 retry.go:31] will retry after 975.337887ms: waiting for domain to come up
	I0908 14:47:51.931801 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:51.932502 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:51.932557 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:51.932480 1161911 retry.go:31] will retry after 1.756101885s: waiting for domain to come up
	I0908 14:47:53.691128 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | domain embed-certs-372004 has defined MAC address 52:54:00:a4:7d:d3 in network mk-embed-certs-372004
	I0908 14:47:53.691920 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | unable to find current IP address of domain embed-certs-372004 in network mk-embed-certs-372004
	I0908 14:47:53.692141 1161554 main.go:141] libmachine: (embed-certs-372004) DBG | I0908 14:47:53.692087 1161911 retry.go:31] will retry after 1.815249423s: waiting for domain to come up
	I0908 14:47:51.638446 1161261 main.go:141] libmachine: (pause-120061) Calling .GetIP
	I0908 14:47:51.642263 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:51.642744 1161261 main.go:141] libmachine: (pause-120061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:49:28", ip: ""} in network mk-pause-120061: {Iface:virbr1 ExpiryTime:2025-09-08 15:45:41 +0000 UTC Type:0 Mac:52:54:00:a0:49:28 Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:pause-120061 Clientid:01:52:54:00:a0:49:28}
	I0908 14:47:51.642776 1161261 main.go:141] libmachine: (pause-120061) DBG | domain pause-120061 has defined IP address 192.168.61.147 and MAC address 52:54:00:a0:49:28 in network mk-pause-120061
	I0908 14:47:51.643169 1161261 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0908 14:47:51.649711 1161261 kubeadm.go:875] updating cluster {Name:pause-120061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-120061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 14:47:51.649917 1161261 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 14:47:51.649988 1161261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:51.704103 1161261 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 14:47:51.704142 1161261 crio.go:433] Images already preloaded, skipping extraction
	I0908 14:47:51.704223 1161261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:47:51.748253 1161261 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 14:47:51.748292 1161261 cache_images.go:85] Images are preloaded, skipping loading
	I0908 14:47:51.748303 1161261 kubeadm.go:926] updating node { 192.168.61.147 8443 v1.34.0 crio true true} ...
	I0908 14:47:51.748454 1161261 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-120061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-120061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
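
Note the empty `ExecStart=` line in the generated kubelet drop-in above: systemd allows only one ExecStart for a simple service, so a drop-in must first clear the inherited value with a bare `ExecStart=` before assigning its own command line. The blank assignment is deliberate, not a rendering artifact.
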
	I0908 14:47:51.748544 1161261 ssh_runner.go:195] Run: crio config
	I0908 14:47:51.824864 1161261 cni.go:84] Creating CNI manager for ""
	I0908 14:47:51.824905 1161261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 14:47:51.824923 1161261 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 14:47:51.824965 1161261 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.147 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-120061 NodeName:pause-120061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 14:47:51.825192 1161261 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-120061"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.147"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.147"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 14:47:51.825283 1161261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 14:47:51.846600 1161261 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 14:47:51.846699 1161261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 14:47:51.862367 1161261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0908 14:47:51.890754 1161261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 14:47:51.921238 1161261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0908 14:47:51.949413 1161261 ssh_runner.go:195] Run: grep 192.168.61.147	control-plane.minikube.internal$ /etc/hosts
	I0908 14:47:51.955910 1161261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:47:52.155633 1161261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:47:52.176352 1161261 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061 for IP: 192.168.61.147
	I0908 14:47:52.176384 1161261 certs.go:194] generating shared ca certs ...
	I0908 14:47:52.176403 1161261 certs.go:226] acquiring lock for ca certs: {Name:mk10dcd85eee4d8b0413bd848f61156bf964b6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:47:52.176662 1161261 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key
	I0908 14:47:52.176721 1161261 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key
	I0908 14:47:52.176735 1161261 certs.go:256] generating profile certs ...
	I0908 14:47:52.176854 1161261 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/client.key
	I0908 14:47:52.176942 1161261 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/apiserver.key.71e213e0
	I0908 14:47:52.177028 1161261 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/proxy-client.key
	I0908 14:47:52.177196 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem (1338 bytes)
	W0908 14:47:52.177239 1161261 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875_empty.pem, impossibly tiny 0 bytes
	I0908 14:47:52.177253 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 14:47:52.177292 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/ca.pem (1082 bytes)
	I0908 14:47:52.177334 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/cert.pem (1123 bytes)
	I0908 14:47:52.177362 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/key.pem (1675 bytes)
	I0908 14:47:52.177417 1161261 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem (1708 bytes)
	I0908 14:47:52.178125 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 14:47:52.216860 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 14:47:52.264992 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 14:47:52.315906 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 14:47:52.366512 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 14:47:52.407534 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 14:47:52.457127 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 14:47:52.505152 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/pause-120061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 14:47:52.549547 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/ssl/certs/11208752.pem --> /usr/share/ca-certificates/11208752.pem (1708 bytes)
	I0908 14:47:52.588151 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 14:47:52.629239 1161261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1116714/.minikube/certs/1120875.pem --> /usr/share/ca-certificates/1120875.pem (1338 bytes)
	I0908 14:47:52.666334 1161261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 14:47:52.692809 1161261 ssh_runner.go:195] Run: openssl version
	I0908 14:47:52.700407 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208752.pem && ln -fs /usr/share/ca-certificates/11208752.pem /etc/ssl/certs/11208752.pem"
	I0908 14:47:52.717734 1161261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208752.pem
	I0908 14:47:52.725301 1161261 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 13:46 /usr/share/ca-certificates/11208752.pem
	I0908 14:47:52.725396 1161261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208752.pem
	I0908 14:47:52.735515 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11208752.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 14:47:52.751195 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 14:47:52.769652 1161261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:52.777129 1161261 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:35 /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:52.777209 1161261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:47:52.787042 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 14:47:52.803329 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120875.pem && ln -fs /usr/share/ca-certificates/1120875.pem /etc/ssl/certs/1120875.pem"
	I0908 14:47:52.822959 1161261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120875.pem
	I0908 14:47:52.831158 1161261 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 13:46 /usr/share/ca-certificates/1120875.pem
	I0908 14:47:52.831251 1161261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120875.pem
	I0908 14:47:52.848780 1161261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120875.pem /etc/ssl/certs/51391683.0"
	I0908 14:47:52.910305 1161261 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 14:47:52.947063 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 14:47:52.980746 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 14:47:53.017172 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 14:47:53.029502 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 14:47:53.050518 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 14:47:53.066057 1161261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0908 14:47:53.090136 1161261 kubeadm.go:392] StartCluster: {Name:pause-120061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-120061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:47:53.090336 1161261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 14:47:53.090436 1161261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 14:47:53.258288 1161261 cri.go:89] found id: "f396885ab602525616471c4a3078ab5befab72cec72eb50c586e5eb321dbf922"
	I0908 14:47:53.258340 1161261 cri.go:89] found id: "6f6f4bdc578435a925c85945bddfe6a5ac8b51b3cc376b776a33a1d585bd2c29"
	I0908 14:47:53.258348 1161261 cri.go:89] found id: "6936912d89250ecd151886026e92e7d034661849c0bfab75a31547b61a0fe66a"
	I0908 14:47:53.258352 1161261 cri.go:89] found id: "ee305c82781917bfbaab4b509ef785aeb3b96bd60c2ec05530b1c3d48a225512"
	I0908 14:47:53.258356 1161261 cri.go:89] found id: "06f87ac3295d31633f69192af6ed4823f0bf18648983434dcaa6db09d069d6bd"
	I0908 14:47:53.258361 1161261 cri.go:89] found id: "8ed8110fce0f009048f3aca5ce0a9a67946864f102d5a3e3a5da1c1053c5cb04"
	I0908 14:47:53.258366 1161261 cri.go:89] found id: ""
	I0908 14:47:53.258430 1161261 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-120061 -n pause-120061
helpers_test.go:269: (dbg) Run:  kubectl --context pause-120061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (89.40s)
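
The certificate handling in the log above reduces to two standard openssl idioms: linking a CA cert into /etc/ssl/certs under its subject hash, and probing for imminent expiry with -checkend. A minimal sketch of both, reusing the paths that appear in the log (run inside the guest):

    # Link a CA cert under its OpenSSL subject hash, as the ssh_runner
    # commands above do for minikubeCA.pem (hash b5213941 in this run).
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"

    # Exit nonzero if the cert expires within the next 24h (86400s),
    # mirroring the -checkend probes run against each control-plane cert.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400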

                                                
                                    

Test pass (278/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.21
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.0/json-events 5.18
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.15
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.67
22 TestOffline 141.12
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 208.73
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 10.58
35 TestAddons/parallel/Registry 16.83
36 TestAddons/parallel/RegistryCreds 0.99
38 TestAddons/parallel/InspektorGadget 6.33
39 TestAddons/parallel/MetricsServer 7.53
41 TestAddons/parallel/CSI 50.98
42 TestAddons/parallel/Headlamp 23.69
43 TestAddons/parallel/CloudSpanner 6.67
44 TestAddons/parallel/LocalPath 10.22
45 TestAddons/parallel/NvidiaDevicePlugin 6.83
46 TestAddons/parallel/Yakd 12.42
48 TestAddons/StoppedEnableDisable 91.22
49 TestCertOptions 74.81
50 TestCertExpiration 319.91
52 TestForceSystemdFlag 53.5
53 TestForceSystemdEnv 45.53
55 TestKVMDriverInstallOrUpdate 1.42
59 TestErrorSpam/setup 44.08
60 TestErrorSpam/start 0.41
61 TestErrorSpam/status 0.89
62 TestErrorSpam/pause 1.89
63 TestErrorSpam/unpause 2.17
64 TestErrorSpam/stop 93.16
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 95.81
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 45.57
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.52
76 TestFunctional/serial/CacheCmd/cache/add_local 1.2
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 35.84
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.69
87 TestFunctional/serial/LogsFileCmd 1.66
88 TestFunctional/serial/InvalidService 4.11
90 TestFunctional/parallel/ConfigCmd 0.42
91 TestFunctional/parallel/DashboardCmd 42.39
92 TestFunctional/parallel/DryRun 0.33
93 TestFunctional/parallel/InternationalLanguage 0.18
94 TestFunctional/parallel/StatusCmd 1.04
98 TestFunctional/parallel/ServiceCmdConnect 7.7
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 46.57
102 TestFunctional/parallel/SSHCmd 0.53
103 TestFunctional/parallel/CpCmd 1.62
104 TestFunctional/parallel/MySQL 31.59
105 TestFunctional/parallel/FileSync 0.31
106 TestFunctional/parallel/CertSync 1.38
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
114 TestFunctional/parallel/License 0.29
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.37
117 TestFunctional/parallel/ImageCommands/ImageListJson 1.13
120 TestFunctional/parallel/ImageCommands/Setup 0.46
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.93
122 TestFunctional/parallel/ServiceCmd/DeployApp 9.24
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.09
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.56
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
138 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
139 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
140 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
142 TestFunctional/parallel/MountCmd/any-port 14.95
143 TestFunctional/parallel/ServiceCmd/List 0.34
144 TestFunctional/parallel/ProfileCmd/profile_list 0.46
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
148 TestFunctional/parallel/ServiceCmd/Format 0.51
149 TestFunctional/parallel/ServiceCmd/URL 0.47
150 TestFunctional/parallel/Version/short 0.06
151 TestFunctional/parallel/Version/components 0.74
152 TestFunctional/parallel/MountCmd/specific-port 2.05
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 275.09
162 TestMultiControlPlane/serial/DeployApp 6.37
163 TestMultiControlPlane/serial/PingHostFromPods 1.42
164 TestMultiControlPlane/serial/AddWorkerNode 57.77
165 TestMultiControlPlane/serial/NodeLabels 0.09
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
167 TestMultiControlPlane/serial/CopyFile 14.72
168 TestMultiControlPlane/serial/StopSecondaryNode 91.5
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
170 TestMultiControlPlane/serial/RestartSecondaryNode 37.65
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.05
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 472.89
173 TestMultiControlPlane/serial/DeleteSecondaryNode 19.34
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
175 TestMultiControlPlane/serial/StopCluster 272.91
176 TestMultiControlPlane/serial/RestartCluster 115.38
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
178 TestMultiControlPlane/serial/AddSecondaryNode 115.26
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
183 TestJSONOutput/start/Command 88.91
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.86
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.78
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.38
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 99.72
215 TestMountStart/serial/StartWithMountFirst 28.89
216 TestMountStart/serial/VerifyMountFirst 0.41
217 TestMountStart/serial/StartWithMountSecond 29.72
218 TestMountStart/serial/VerifyMountSecond 0.42
219 TestMountStart/serial/DeleteFirst 0.63
220 TestMountStart/serial/VerifyMountPostDelete 0.41
221 TestMountStart/serial/Stop 2.32
222 TestMountStart/serial/RestartStopped 24.01
223 TestMountStart/serial/VerifyMountPostStop 0.43
226 TestMultiNode/serial/FreshStart2Nodes 114.71
227 TestMultiNode/serial/DeployApp2Nodes 4.62
228 TestMultiNode/serial/PingHostFrom2Pods 0.89
229 TestMultiNode/serial/AddNode 49.24
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.66
232 TestMultiNode/serial/CopyFile 8.26
233 TestMultiNode/serial/StopNode 3.29
234 TestMultiNode/serial/StartAfterStop 38.25
235 TestMultiNode/serial/RestartKeepsNodes 349.83
236 TestMultiNode/serial/DeleteNode 2.86
237 TestMultiNode/serial/StopMultiNode 181.77
238 TestMultiNode/serial/RestartMultiNode 136.7
239 TestMultiNode/serial/ValidateNameConflict 47.42
246 TestScheduledStopUnix 122.31
250 TestRunningBinaryUpgrade 108.11
252 TestKubernetesUpgrade 255.62
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 121.31
257 TestNoKubernetes/serial/StartWithStopK8s 9.37
258 TestNoKubernetes/serial/Start 27.93
259 TestStoppedBinaryUpgrade/Setup 0.61
260 TestStoppedBinaryUpgrade/Upgrade 125.5
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
262 TestNoKubernetes/serial/ProfileList 1.19
263 TestNoKubernetes/serial/Stop 1.4
264 TestNoKubernetes/serial/StartNoArgs 68.28
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.44
275 TestPause/serial/Start 111.01
283 TestNetworkPlugins/group/false 3.61
288 TestStartStop/group/old-k8s-version/serial/FirstStart 135.22
290 TestStartStop/group/no-preload/serial/FirstStart 144.26
293 TestStartStop/group/embed-certs/serial/FirstStart 138.24
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.28
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.77
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.59
298 TestStartStop/group/old-k8s-version/serial/Stop 91.05
299 TestStartStop/group/no-preload/serial/DeployApp 9.35
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.36
301 TestStartStop/group/no-preload/serial/Stop 90.98
302 TestStartStop/group/embed-certs/serial/DeployApp 9.31
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
304 TestStartStop/group/embed-certs/serial/Stop 90.97
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.47
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
309 TestStartStop/group/old-k8s-version/serial/SecondStart 51.91
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
311 TestStartStop/group/no-preload/serial/SecondStart 65.7
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
314 TestStartStop/group/embed-certs/serial/SecondStart 56.36
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
317 TestStartStop/group/old-k8s-version/serial/Pause 4.1
319 TestStartStop/group/newest-cni/serial/FirstStart 72.47
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 88.26
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
326 TestStartStop/group/no-preload/serial/Pause 4.91
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
328 TestNetworkPlugins/group/auto/Start 111.72
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
330 TestStartStop/group/embed-certs/serial/Pause 3.69
331 TestNetworkPlugins/group/kindnet/Start 134.1
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.94
334 TestStartStop/group/newest-cni/serial/Stop 9.57
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
336 TestStartStop/group/newest-cni/serial/SecondStart 78.59
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.38
341 TestNetworkPlugins/group/calico/Start 112.32
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
345 TestNetworkPlugins/group/auto/KubeletFlags 0.28
346 TestStartStop/group/newest-cni/serial/Pause 4.13
347 TestNetworkPlugins/group/auto/NetCatPod 10.3
348 TestNetworkPlugins/group/custom-flannel/Start 82.28
349 TestNetworkPlugins/group/auto/DNS 0.2
350 TestNetworkPlugins/group/auto/Localhost 0.17
351 TestNetworkPlugins/group/auto/HairPin 0.18
352 TestNetworkPlugins/group/enable-default-cni/Start 104.06
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
355 TestNetworkPlugins/group/kindnet/NetCatPod 11.35
356 TestNetworkPlugins/group/kindnet/DNS 0.26
357 TestNetworkPlugins/group/kindnet/Localhost 0.2
358 TestNetworkPlugins/group/kindnet/HairPin 0.23
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/flannel/Start 90.19
361 TestNetworkPlugins/group/calico/KubeletFlags 0.32
362 TestNetworkPlugins/group/calico/NetCatPod 14.41
363 TestNetworkPlugins/group/calico/DNS 0.2
364 TestNetworkPlugins/group/calico/Localhost 0.16
365 TestNetworkPlugins/group/calico/HairPin 0.18
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.3
368 TestNetworkPlugins/group/custom-flannel/DNS 0.22
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
371 TestNetworkPlugins/group/bridge/Start 99.13
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
379 TestNetworkPlugins/group/flannel/NetCatPod 10.29
380 TestNetworkPlugins/group/flannel/DNS 0.17
381 TestNetworkPlugins/group/flannel/Localhost 0.14
382 TestNetworkPlugins/group/flannel/HairPin 0.2
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
384 TestNetworkPlugins/group/bridge/NetCatPod 10.26
385 TestNetworkPlugins/group/bridge/DNS 0.17
386 TestNetworkPlugins/group/bridge/Localhost 0.13
387 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.28.0/json-events (9.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-345918 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-345918 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.212380198s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 13:35:17.026268 1120875 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0908 13:35:17.026377 1120875 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
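
The check above only confirms the cached tarball exists on disk. A hand-rolled equivalent, assuming the default MINIKUBE_HOME layout rather than the Jenkins-specific path in the log:

    # Manual equivalent of the preload-exists check; the $HOME/.minikube
    # prefix is an assumption (the CI run uses a custom MINIKUBE_HOME).
    preload="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
    [ -s "$preload" ] && echo "preload present: $(du -h "$preload" | cut -f1)"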

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-345918
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-345918: exit status 85 (74.720507ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-345918 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-345918 │ jenkins │ v1.36.0 │ 08 Sep 25 13:35 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:35:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:35:07.861339 1120887 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:35:07.861650 1120887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:35:07.861662 1120887 out.go:374] Setting ErrFile to fd 2...
	I0908 13:35:07.861667 1120887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:35:07.861880 1120887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	W0908 13:35:07.862013 1120887 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21508-1116714/.minikube/config/config.json: open /home/jenkins/minikube-integration/21508-1116714/.minikube/config/config.json: no such file or directory
	I0908 13:35:07.862638 1120887 out.go:368] Setting JSON to true
	I0908 13:35:07.863786 1120887 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15452,"bootTime":1757323056,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:35:07.863931 1120887 start.go:140] virtualization: kvm guest
	I0908 13:35:07.866896 1120887 out.go:99] [download-only-345918] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0908 13:35:07.867142 1120887 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 13:35:07.867192 1120887 notify.go:220] Checking for updates...
	I0908 13:35:07.868904 1120887 out.go:171] MINIKUBE_LOCATION=21508
	I0908 13:35:07.870624 1120887 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:35:07.872081 1120887 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 13:35:07.873356 1120887 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 13:35:07.874767 1120887 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 13:35:07.877165 1120887 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:35:07.877428 1120887 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:35:07.916480 1120887 out.go:99] Using the kvm2 driver based on user configuration
	I0908 13:35:07.916529 1120887 start.go:304] selected driver: kvm2
	I0908 13:35:07.916537 1120887 start.go:918] validating driver "kvm2" against <nil>
	I0908 13:35:07.916909 1120887 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:35:07.917024 1120887 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21508-1116714/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0908 13:35:07.921096 1120887 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0908 13:35:07.922763 1120887 out.go:99] Downloading driver docker-machine-driver-kvm2:
	I0908 13:35:07.922899 1120887 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:35:08.303816 1120887 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:35:08.304428 1120887 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0908 13:35:08.304573 1120887 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:35:08.304611 1120887 cni.go:84] Creating CNI manager for ""
	I0908 13:35:08.304661 1120887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 13:35:08.304671 1120887 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 13:35:08.304735 1120887 start.go:348] cluster config:
	{Name:download-only-345918 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-345918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:35:08.304920 1120887 iso.go:125] acquiring lock: {Name:mk347390bf24761f2c39bf4cd5b718f157a50faa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:35:08.307135 1120887 out.go:99] Downloading VM boot image ...
	I0908 13:35:08.307192 1120887 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 13:35:11.763762 1120887 out.go:99] Starting "download-only-345918" primary control-plane node in "download-only-345918" cluster
	I0908 13:35:11.763801 1120887 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 13:35:11.786698 1120887 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0908 13:35:11.786742 1120887 cache.go:58] Caching tarball of preloaded images
	I0908 13:35:11.786896 1120887 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 13:35:11.788726 1120887 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 13:35:11.788752 1120887 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 13:35:11.814711 1120887 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-345918 host does not exist
	  To start a cluster, run: "minikube start -p download-only-345918"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
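
The download URLs in the log carry their expected checksums inline (?checksum=md5:... or a .sha256 sidecar). A sketch of verifying the v1.28.0 preload by hand, using the md5 embedded in the URL shown above:

    # Fetch the preload and verify it against the md5 from the log's
    # download URL; the local filename is an arbitrary placeholder.
    url="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
    curl -fLo preload.tar.lz4 "$url"
    echo "72bc7f8573f574c02d8c9a9b3496176b  preload.tar.lz4" | md5sum -c -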

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-345918
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/json-events (5.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-419467 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-419467 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.18093205s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.18s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 13:35:22.591757 1120875 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0908 13:35:22.591803 1120875 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-419467
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-419467: exit status 85 (74.297408ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-345918 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-345918 │ jenkins │ v1.36.0 │ 08 Sep 25 13:35 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 13:35 UTC │ 08 Sep 25 13:35 UTC │
	│ delete  │ -p download-only-345918                                                                                                                                                 │ download-only-345918 │ jenkins │ v1.36.0 │ 08 Sep 25 13:35 UTC │ 08 Sep 25 13:35 UTC │
	│ start   │ -o=json --download-only -p download-only-419467 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-419467 │ jenkins │ v1.36.0 │ 08 Sep 25 13:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:35:17
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:35:17.458320 1121079 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:35:17.458566 1121079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:35:17.458576 1121079 out.go:374] Setting ErrFile to fd 2...
	I0908 13:35:17.458581 1121079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:35:17.458831 1121079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 13:35:17.459508 1121079 out.go:368] Setting JSON to true
	I0908 13:35:17.460509 1121079 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15461,"bootTime":1757323056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:35:17.460630 1121079 start.go:140] virtualization: kvm guest
	I0908 13:35:17.462713 1121079 out.go:99] [download-only-419467] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 13:35:17.462924 1121079 notify.go:220] Checking for updates...
	I0908 13:35:17.464305 1121079 out.go:171] MINIKUBE_LOCATION=21508
	I0908 13:35:17.465903 1121079 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:35:17.467612 1121079 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 13:35:17.469396 1121079 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 13:35:17.470875 1121079 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 13:35:17.473503 1121079 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:35:17.473843 1121079 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:35:17.509554 1121079 out.go:99] Using the kvm2 driver based on user configuration
	I0908 13:35:17.509599 1121079 start.go:304] selected driver: kvm2
	I0908 13:35:17.509609 1121079 start.go:918] validating driver "kvm2" against <nil>
	I0908 13:35:17.509955 1121079 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:35:17.510071 1121079 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21508-1116714/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 13:35:17.527413 1121079 install.go:137] /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 13:35:17.527499 1121079 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:35:17.528151 1121079 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0908 13:35:17.528311 1121079 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:35:17.528343 1121079 cni.go:84] Creating CNI manager for ""
	I0908 13:35:17.528388 1121079 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 13:35:17.528397 1121079 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 13:35:17.528458 1121079 start.go:348] cluster config:
	{Name:download-only-419467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-419467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:35:17.528554 1121079 iso.go:125] acquiring lock: {Name:mk347390bf24761f2c39bf4cd5b718f157a50faa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:35:17.530471 1121079 out.go:99] Starting "download-only-419467" primary control-plane node in "download-only-419467" cluster
	I0908 13:35:17.530494 1121079 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:35:17.551698 1121079 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 13:35:17.551755 1121079 cache.go:58] Caching tarball of preloaded images
	I0908 13:35:17.551975 1121079 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:35:17.554105 1121079 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0908 13:35:17.554126 1121079 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 13:35:17.580682 1121079 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 13:35:21.251368 1121079 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 13:35:21.251481 1121079 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 13:35:22.077533 1121079 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 13:35:22.077956 1121079 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/download-only-419467/config.json ...
	I0908 13:35:22.077995 1121079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/download-only-419467/config.json: {Name:mka1fa0235f4caf6551029153f784ba030bd2bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:35:22.078206 1121079 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:35:22.078410 1121079 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21508-1116714/.minikube/cache/linux/amd64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-419467 host does not exist
	  To start a cluster, run: "minikube start -p download-only-419467"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-419467
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.67s)

=== RUN   TestBinaryMirror
I0908 13:35:23.260312 1120875 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-492864 --alsologtostderr --binary-mirror http://127.0.0.1:42959 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-492864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-492864
--- PASS: TestBinaryMirror (0.67s)
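
The --binary-mirror endpoint only has to expose the same layout as dl.k8s.io (e.g. <mirror>/v1.34.0/bin/linux/amd64/kubectl). A minimal sketch of standing one up by hand, assuming a hypothetical /srv/k8s-mirror directory pre-populated with that tree; the port and profile name here are illustrative:

    # serve the mirror tree locally, then point minikube's binary downloads at it
    python3 -m http.server 42959 --directory /srv/k8s-mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:42959 --driver=kvm2 --container-runtime=crio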

TestOffline (141.12s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-894240 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-894240 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (2m20.090219558s)
helpers_test.go:175: Cleaning up "offline-crio-894240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-894240
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-894240: (1.029577677s)
--- PASS: TestOffline (141.12s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-674449
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-674449: exit status 85 (62.550894ms)

-- stdout --
	* Profile "addons-674449" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-674449"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-674449
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-674449: exit status 85 (62.223937ms)

-- stdout --
	* Profile "addons-674449" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-674449"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
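
Both pre-setup checks hinge on minikube returning exit status 85 when the named profile does not exist yet. A minimal sketch of the same probe outside the test harness (the profile name here is hypothetical):

    out/minikube-linux-amd64 addons enable dashboard -p no-such-profile
    echo $?   # prints 85 while the profile is absent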

TestAddons/Setup (208.73s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-674449 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-674449 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m28.728860509s)
--- PASS: TestAddons/Setup (208.73s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-674449 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-674449 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (10.58s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-674449 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-674449 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f965c06e-67a7-4092-9b85-b30957e5cec1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f965c06e-67a7-4092-9b85-b30957e5cec1] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.007895959s
addons_test.go:694: (dbg) Run:  kubectl --context addons-674449 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-674449 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-674449 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.58s)

TestAddons/parallel/Registry (16.83s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.592606ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-gc8hq" [a413903f-bf54-4cd9-a1c0-7a955a711b5d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005577501s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7ngm4" [d9cfc107-6d8d-4cc3-9a3b-165b7418c9a1] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006028516s
addons_test.go:392: (dbg) Run:  kubectl --context addons-674449 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-674449 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-674449 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.851079172s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 ip
2025/09/08 13:39:28 [DEBUG] GET http://192.168.39.135:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.83s)

TestAddons/parallel/RegistryCreds (0.99s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.442111ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-674449
addons_test.go:332: (dbg) Run:  kubectl --context addons-674449 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.99s)

TestAddons/parallel/InspektorGadget (6.33s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-84fdd" [310f336f-3bf3-4b6d-898f-9ca64c3c855b] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004705507s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.33s)

TestAddons/parallel/MetricsServer (7.53s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.736098ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-vpjdn" [bfb3b498-93a0-4972-8c87-5ed48139b3d8] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007581415s
addons_test.go:463: (dbg) Run:  kubectl --context addons-674449 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-674449 addons disable metrics-server --alsologtostderr -v=1: (1.412633741s)
--- PASS: TestAddons/parallel/MetricsServer (7.53s)

TestAddons/parallel/CSI (50.98s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0908 13:39:24.754091 1120875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.746016ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-674449 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-674449 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a2e992f4-3497-4f7d-8a92-82dd0c4fa6c1] Pending
helpers_test.go:352: "task-pv-pod" [a2e992f4-3497-4f7d-8a92-82dd0c4fa6c1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a2e992f4-3497-4f7d-8a92-82dd0c4fa6c1] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 22.003874761s
addons_test.go:572: (dbg) Run:  kubectl --context addons-674449 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-674449 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-674449 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-674449 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-674449 delete pod task-pv-pod: (1.510378093s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-674449 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-674449 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-674449 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [c0be8bcb-7c5c-4a80-89f4-871f4a80b093] Pending
helpers_test.go:352: "task-pv-pod-restore" [c0be8bcb-7c5c-4a80-89f4-871f4a80b093] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [c0be8bcb-7c5c-4a80-89f4-871f4a80b093] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.006353523s
addons_test.go:614: (dbg) Run:  kubectl --context addons-674449 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-674449 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-674449 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-674449 addons disable volumesnapshots --alsologtostderr -v=1: (1.076977321s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-674449 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.440400672s)
--- PASS: TestAddons/parallel/CSI (50.98s)
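
The snapshot step in this flow reduces to one VolumeSnapshot object pointing at the hpvc claim. A hedged sketch of an equivalent manifest; the class name csi-hostpath-snapclass is an assumption based on the csi-hostpath-driver addon's usual defaults:

    kubectl --context addons-674449 apply -f - <<'EOF'
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
      source:
        persistentVolumeClaimName: hpvc
    EOF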

TestAddons/parallel/Headlamp (23.69s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-674449 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-674449 --alsologtostderr -v=1: (1.195151823s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-kbn9j" [8dfed74d-34c1-4f83-90f0-a4ffb9d80a5b] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-kbn9j" [8dfed74d-34c1-4f83-90f0-a4ffb9d80a5b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
I0908 13:39:24.762771 1120875 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 13:39:24.762789 1120875 kapi.go:107] duration metric: took 8.736681ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:352: "headlamp-85f8f8dc54-kbn9j" [8dfed74d-34c1-4f83-90f0-a4ffb9d80a5b] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.006002409s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (23.69s)

TestAddons/parallel/CloudSpanner (6.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-gbhqv" [eae54ddf-57c9-48f2-80e8-849fb3769e6e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007816355s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.67s)

TestAddons/parallel/LocalPath (10.22s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-674449 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-674449 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-674449 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [a76ca401-edf9-411a-9181-5e6ffb56c85c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [a76ca401-edf9-411a-9181-5e6ffb56c85c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [a76ca401-edf9-411a-9181-5e6ffb56c85c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004628708s
addons_test.go:967: (dbg) Run:  kubectl --context addons-674449 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 ssh "cat /opt/local-path-provisioner/pvc-b826113c-f42b-42b7-85e8-1488c168911b_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-674449 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-674449 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.22s)
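
The claim side of this test is small enough to inline. A minimal sketch of an equivalent PVC; local-path is the storage class the storage-provisioner-rancher addon normally installs, and the claim name and size here are illustrative:

    kubectl --context addons-674449 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources:
        requests:
          storage: 64Mi
    EOF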

TestAddons/parallel/NvidiaDevicePlugin (6.83s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-n676m" [fd89881d-3311-4bbe-bd0e-8609f7c85713] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006021822s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.83s)

TestAddons/parallel/Yakd (12.42s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-vfrf9" [8ab93d34-b60c-4904-b70c-425261f38771] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0048948s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-674449 addons disable yakd --alsologtostderr -v=1: (6.40942155s)
--- PASS: TestAddons/parallel/Yakd (12.42s)

TestAddons/StoppedEnableDisable (91.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-674449
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-674449: (1m30.896568642s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-674449
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-674449
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-674449
--- PASS: TestAddons/StoppedEnableDisable (91.22s)

TestCertOptions (74.81s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-110049 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-110049 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m13.581471398s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-110049 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-110049 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-110049 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-110049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-110049
--- PASS: TestCertOptions (74.81s)
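
The SAN assertion above can be narrowed by hand. A minimal sketch using the same cert path the test reads; the grep pattern is an assumption about openssl's text layout:

    out/minikube-linux-amd64 -p cert-options-110049 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    # should list 127.0.0.1, 192.168.15.15, localhost and www.google.com among the SANs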

TestCertExpiration (319.91s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-001432 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-001432 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m9.772184607s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-001432 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-001432 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m8.851582538s)
helpers_test.go:175: Cleaning up "cert-expiration-001432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-001432
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-001432: (1.283395956s)
--- PASS: TestCertExpiration (319.91s)
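
The effect of --cert-expiration is visible in the notBefore/notAfter window of the issued certificate. A minimal sketch, reusing the apiserver cert path from TestCertOptions:

    out/minikube-linux-amd64 -p cert-expiration-001432 ssh \
      "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"
    # with --cert-expiration=3m the two dates sit roughly three minutes apart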

TestForceSystemdFlag (53.5s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-847393 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-847393 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.339675414s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-847393 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-847393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-847393
--- PASS: TestForceSystemdFlag (53.50s)

TestForceSystemdEnv (45.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-962829 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-962829 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.605536475s)
helpers_test.go:175: Cleaning up "force-systemd-env-962829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-962829
--- PASS: TestForceSystemdEnv (45.53s)

TestKVMDriverInstallOrUpdate (1.42s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0908 14:45:35.530815 1120875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 14:45:35.530980 1120875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0908 14:45:35.573270 1120875 install.go:62] docker-machine-driver-kvm2: exit status 1
W0908 14:45:35.573513 1120875 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 14:45:35.573600 1120875 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1698161429/001/docker-machine-driver-kvm2
I0908 14:45:35.824136 1120875 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1698161429/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000548540 gz:0xc000548548 tar:0xc000548460 tar.bz2:0xc000548470 tar.gz:0xc0005484b0 tar.xz:0xc0005484e0 tar.zst:0xc0005484f0 tbz2:0xc000548470 tgz:0xc0005484b0 txz:0xc0005484e0 tzst:0xc0005484f0 xz:0xc000548550 zip:0xc000548560 zst:0xc000548558] Getters:map[file:0xc001804620 http:0xc001cba280 https:0xc001cba2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 14:45:35.824249 1120875 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1698161429/001/docker-machine-driver-kvm2
I0908 14:45:36.465973 1120875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 14:45:36.466094 1120875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0908 14:45:36.511725 1120875 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0908 14:45:36.511766 1120875 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0908 14:45:36.511837 1120875 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 14:45:36.511868 1120875 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1698161429/002/docker-machine-driver-kvm2
I0908 14:45:36.540781 1120875 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1698161429/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000548540 gz:0xc000548548 tar:0xc000548460 tar.bz2:0xc000548470 tar.gz:0xc0005484b0 tar.xz:0xc0005484e0 tar.zst:0xc0005484f0 tbz2:0xc000548470 tgz:0xc0005484b0 txz:0xc0005484e0 tzst:0xc0005484f0 xz:0xc000548550 zip:0xc000548560 zst:0xc000548558] Getters:map[file:0xc001805980 http:0xc001cbb900 https:0xc001cbb950] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 14:45:36.540899 1120875 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1698161429/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.42s)
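
The two download.go lines above capture the fallback this test exercises: the arch-suffixed release asset is tried first, and when its checksum file 404s the unsuffixed common name is fetched instead. A simplified sketch of that ordering, with curl standing in for the internal getter and no checksum verification:

    BASE=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    curl -fLO "$BASE/docker-machine-driver-kvm2-amd64" \
      || curl -fLO "$BASE/docker-machine-driver-kvm2"   # fall back to the common version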

TestErrorSpam/setup (44.08s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-886918 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-886918 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-886918 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-886918 --driver=kvm2  --container-runtime=crio: (44.0832989s)
--- PASS: TestErrorSpam/setup (44.08s)

TestErrorSpam/start (0.41s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

TestErrorSpam/status (0.89s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 status
--- PASS: TestErrorSpam/status (0.89s)

TestErrorSpam/pause (1.89s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 pause
--- PASS: TestErrorSpam/pause (1.89s)

TestErrorSpam/unpause (2.17s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 unpause
--- PASS: TestErrorSpam/unpause (2.17s)

TestErrorSpam/stop (93.16s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 stop: (1m30.95763855s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-886918 --log_dir /tmp/nospam-886918 stop: (1.296082876s)
--- PASS: TestErrorSpam/stop (93.16s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21508-1116714/.minikube/files/etc/test/nested/copy/1120875/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (95.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-864151 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-864151 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m35.808689269s)
--- PASS: TestFunctional/serial/StartWithProxy (95.81s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (45.57s)

=== RUN   TestFunctional/serial/SoftStart
I0908 13:47:54.001953 1120875 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-864151 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-864151 --alsologtostderr -v=8: (45.570511935s)
functional_test.go:678: soft start took 45.571454973s for "functional-864151" cluster.
I0908 13:48:39.572920 1120875 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (45.57s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-864151 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 cache add registry.k8s.io/pause:3.1: (1.118765276s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 cache add registry.k8s.io/pause:3.3: (1.176544312s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 cache add registry.k8s.io/pause:latest: (1.225299823s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

TestFunctional/serial/CacheCmd/cache/add_local (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-864151 /tmp/TestFunctionalserialCacheCmdcacheadd_local4242710338/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 cache add minikube-local-cache-test:functional-864151
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 cache delete minikube-local-cache-test:functional-864151
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-864151
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-864151 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (256.235827ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 cache reload: (1.054242907s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 kubectl -- --context functional-864151 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-864151 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (35.84s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-864151 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 13:48:53.443924 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:48:53.450676 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:48:53.462319 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:48:53.483863 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:48:53.525385 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:48:53.607005 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:48:53.768720 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:48:54.090550 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:48:54.732443 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:48:56.014184 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:48:58.575846 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:49:03.698224 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:49:13.940434 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-864151 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.84189246s)
functional_test.go:776: restart took 35.842013662s for "functional-864151" cluster.
I0908 13:49:22.899741 1120875 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (35.84s)
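For reference: the --extra-config value above is a component.key=value triple that ends up in the profile's ExtraOptions (visible in the config dumps later in this report). A minimal Go sketch of that split, illustrative only, not minikube's actual parser:

// Splits a flag value like
// "apiserver.enable-admission-plugins=NamespaceAutoProvision"
// into the {Component Key Value} triple seen in the profile dump.
package main

import (
    "fmt"
    "strings"
)

type ExtraOption struct {
    Component string // e.g. "apiserver"
    Key       string // e.g. "enable-admission-plugins"
    Value     string // e.g. "NamespaceAutoProvision"
}

func parseExtraConfig(s string) (ExtraOption, error) {
    // The first '=' separates the component.key path from the value.
    kv := strings.SplitN(s, "=", 2)
    if len(kv) != 2 {
        return ExtraOption{}, fmt.Errorf("expected component.key=value, got %q", s)
    }
    // The first '.' separates the component from the (possibly dotted) key.
    ck := strings.SplitN(kv[0], ".", 2)
    if len(ck) != 2 {
        return ExtraOption{}, fmt.Errorf("expected component.key, got %q", kv[0])
    }
    return ExtraOption{Component: ck[0], Key: ck[1], Value: kv[1]}, nil
}

func main() {
    opt, err := parseExtraConfig("apiserver.enable-admission-plugins=NamespaceAutoProvision")
    if err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", opt) // {Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}
}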

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-864151 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.69s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 logs: (1.693624473s)
--- PASS: TestFunctional/serial/LogsCmd (1.69s)

TestFunctional/serial/LogsFileCmd (1.66s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 logs --file /tmp/TestFunctionalserialLogsFileCmd447344520/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 logs --file /tmp/TestFunctionalserialLogsFileCmd447344520/001/logs.txt: (1.66082275s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.66s)

TestFunctional/serial/InvalidService (4.11s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-864151 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-864151
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-864151: exit status 115 (317.356293ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.136:30407 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-864151 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.11s)
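For reference: SVC_UNREACHABLE here means the service object exists but nothing backs it. A hedged Go sketch of the underlying condition — does the service have any ready endpoints? — shelling out to kubectl (assumed on PATH; context and service names are the ones from this run):

// Reports whether a service has at least one ready endpoint address.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func hasReadyEndpoints(context, namespace, svc string) (bool, error) {
    out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
        "get", "endpoints", svc,
        "-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
    if err != nil {
        return false, err
    }
    // Empty jsonpath output means no ready pod backs the service.
    return strings.TrimSpace(string(out)) != "", nil
}

func main() {
    ok, err := hasReadyEndpoints("functional-864151", "default", "invalid-svc")
    if err != nil {
        fmt.Println("lookup failed:", err)
        return
    }
    if !ok {
        // Mirrors the log above: no running pod for the service.
        fmt.Println("service not available: no ready endpoints for invalid-svc")
    }
}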

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-864151 config get cpus: exit status 14 (66.735021ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-864151 config get cpus: exit status 14 (74.240525ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
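For reference: the test drives a set/get/unset round-trip and expects exit code 14 whenever the key is absent. A minimal Go sketch of the same loop against the minikube binary (binary path and profile name taken from the log; the helper itself is illustrative):

// Runs minikube config subcommands and captures the exit code.
package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func run(args ...string) (string, int) {
    out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
    code := 0
    var ee *exec.ExitError
    if errors.As(err, &ee) {
        code = ee.ExitCode()
    }
    return string(out), code
}

func main() {
    p := "functional-864151"
    run("-p", p, "config", "set", "cpus", "2")
    out, code := run("-p", p, "config", "get", "cpus")
    fmt.Printf("get after set: %q (exit %d)\n", out, code) // expect "2", exit 0

    run("-p", p, "config", "unset", "cpus")
    _, code = run("-p", p, "config", "get", "cpus")
    fmt.Println("get after unset exit code:", code) // expect 14: key not found
}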

TestFunctional/parallel/DashboardCmd (42.39s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-864151 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-864151 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1129453: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (42.39s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-864151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-864151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (166.701182ms)
-- stdout --
	* [functional-864151] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0908 13:49:41.694429 1129239 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:49:41.694701 1129239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:49:41.694714 1129239 out.go:374] Setting ErrFile to fd 2...
	I0908 13:49:41.694718 1129239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:49:41.694975 1129239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 13:49:41.695621 1129239 out.go:368] Setting JSON to false
	I0908 13:49:41.696858 1129239 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16326,"bootTime":1757323056,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:49:41.696981 1129239 start.go:140] virtualization: kvm guest
	I0908 13:49:41.699029 1129239 out.go:179] * [functional-864151] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 13:49:41.700859 1129239 notify.go:220] Checking for updates...
	I0908 13:49:41.700874 1129239 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:49:41.702408 1129239 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:49:41.703723 1129239 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 13:49:41.705257 1129239 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 13:49:41.706754 1129239 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 13:49:41.708212 1129239 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:49:41.709814 1129239 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:49:41.710234 1129239 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:49:41.710307 1129239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:49:41.728420 1129239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44521
	I0908 13:49:41.729006 1129239 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:49:41.729739 1129239 main.go:141] libmachine: Using API Version  1
	I0908 13:49:41.729782 1129239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:49:41.730353 1129239 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:49:41.730635 1129239 main.go:141] libmachine: (functional-864151) Calling .DriverName
	I0908 13:49:41.730985 1129239 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:49:41.731446 1129239 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:49:41.731507 1129239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:49:41.749300 1129239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33969
	I0908 13:49:41.749822 1129239 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:49:41.750449 1129239 main.go:141] libmachine: Using API Version  1
	I0908 13:49:41.750479 1129239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:49:41.750868 1129239 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:49:41.751050 1129239 main.go:141] libmachine: (functional-864151) Calling .DriverName
	I0908 13:49:41.788750 1129239 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 13:49:41.790038 1129239 start.go:304] selected driver: kvm2
	I0908 13:49:41.790055 1129239 start.go:918] validating driver "kvm2" against &{Name:functional-864151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-864151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:49:41.790191 1129239 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:49:41.792229 1129239 out.go:203] 
	W0908 13:49:41.793854 1129239 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 13:49:41.795157 1129239 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-864151 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
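For reference: exit status 23 comes from up-front validation, before any VM work happens. A minimal sketch of the kind of memory-floor check the output implies (the 1800MB floor and the exit code are taken from the log; the function is illustrative, not minikube's real implementation):

// Rejects a memory request below the usable minimum, as in the dry-run log.
package main

import (
    "fmt"
    "os"
)

const minUsableMemMB = 1800 // usable minimum reported in the log above

func validateRequestedMemory(reqMB int) error {
    if reqMB < minUsableMemMB {
        return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
            reqMB, minUsableMemMB)
    }
    return nil
}

func main() {
    if err := validateRequestedMemory(250); err != nil {
        fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
        os.Exit(23) // exit status 23, as seen in the dry-run output
    }
}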

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-864151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-864151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (182.409156ms)
-- stdout --
	* [functional-864151] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0908 13:49:42.032597 1129310 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:49:42.032874 1129310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:49:42.032883 1129310 out.go:374] Setting ErrFile to fd 2...
	I0908 13:49:42.032888 1129310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:49:42.033277 1129310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 13:49:42.034066 1129310 out.go:368] Setting JSON to false
	I0908 13:49:42.035475 1129310 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16326,"bootTime":1757323056,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:49:42.035573 1129310 start.go:140] virtualization: kvm guest
	I0908 13:49:42.037094 1129310 out.go:179] * [functional-864151] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0908 13:49:42.038648 1129310 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:49:42.038646 1129310 notify.go:220] Checking for updates...
	I0908 13:49:42.040915 1129310 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:49:42.042253 1129310 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 13:49:42.043713 1129310 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 13:49:42.045064 1129310 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 13:49:42.046394 1129310 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:49:42.048282 1129310 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:49:42.048906 1129310 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:49:42.048990 1129310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:49:42.070521 1129310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43077
	I0908 13:49:42.071228 1129310 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:49:42.071845 1129310 main.go:141] libmachine: Using API Version  1
	I0908 13:49:42.071875 1129310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:49:42.072463 1129310 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:49:42.072716 1129310 main.go:141] libmachine: (functional-864151) Calling .DriverName
	I0908 13:49:42.073124 1129310 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:49:42.073435 1129310 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:49:42.073525 1129310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:49:42.093873 1129310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46219
	I0908 13:49:42.095579 1129310 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:49:42.096669 1129310 main.go:141] libmachine: Using API Version  1
	I0908 13:49:42.096699 1129310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:49:42.097141 1129310 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:49:42.097339 1129310 main.go:141] libmachine: (functional-864151) Calling .DriverName
	I0908 13:49:42.138148 1129310 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0908 13:49:42.139406 1129310 start.go:304] selected driver: kvm2
	I0908 13:49:42.139431 1129310 start.go:918] validating driver "kvm2" against &{Name:functional-864151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-864151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:49:42.139598 1129310 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:49:42.142212 1129310 out.go:203] 
	W0908 13:49:42.143480 1129310 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 13:49:42.144825 1129310 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
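For reference: the -f argument above is a Go text/template applied to the status struct; the literal label "kublet" is part of the test's own format string, not a field name. A small self-contained sketch of the same rendering (field values are illustrative stand-ins):

// Renders a status struct through the format string from the log.
package main

import (
    "os"
    "text/template"
)

type Status struct {
    Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
    const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
    t := template.Must(template.New("status").Parse(format))
    // A real run fills these from the cluster; values here are examples.
    s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
    if err := t.Execute(os.Stdout, s); err != nil {
        panic(err)
    }
}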

TestFunctional/parallel/ServiceCmdConnect (7.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-864151 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-864151 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-ncx2d" [ab1a0071-7ebd-4407-be74-c5386b344066] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-ncx2d" [ab1a0071-7ebd-4407-be74-c5386b344066] Running
E0908 13:49:34.421871 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006168717s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.136:30100
functional_test.go:1680: http://192.168.39.136:30100: success! body:
Request served by hello-node-connect-7d85dfc575-ncx2d
HTTP/1.1 GET /
Host: 192.168.39.136:30100
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.70s)
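For reference: the connectivity check boils down to fetching the NodePort URL that `service ... --url` printed and confirming the echo-server names its pod in the response body. A minimal Go sketch using the URL from this run:

// Fetches the NodePort endpoint and checks the echo-server response.
package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"
    "time"
)

func main() {
    client := &http.Client{Timeout: 10 * time.Second}
    resp, err := client.Get("http://192.168.39.136:30100")
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        fmt.Println("read failed:", err)
        return
    }
    // The echo-server reports which pod served the request.
    if strings.Contains(string(body), "hello-node-connect") {
        fmt.Println("success! body:\n" + string(body))
    } else {
        fmt.Println("unexpected body:\n" + string(body))
    }
}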

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (46.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [2e8bd920-21b8-4cd5-b5dd-30ea54b246a6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004173991s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-864151 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-864151 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-864151 get pvc myclaim -o=json
I0908 13:49:37.869541 1120875 retry.go:31] will retry after 1.213000743s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:05d04c5c-eec2-4774-bdaa-0cfdc64b51cf ResourceVersion:769 Generation:0 CreationTimestamp:2025-09-08 13:49:37 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-05d04c5c-eec2-4774-bdaa-0cfdc64b51cf StorageClassName:0xc001cc0a60 VolumeMode:0xc001cc0a70 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-864151 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-864151 apply -f testdata/storage-provisioner/pod.yaml
I0908 13:49:39.296092 1120875 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8665683f-1042-4ffb-9074-0c87608fef4a] Pending
helpers_test.go:352: "sp-pod" [8665683f-1042-4ffb-9074-0c87608fef4a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8665683f-1042-4ffb-9074-0c87608fef4a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.006137182s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-864151 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-864151 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-864151 delete -f testdata/storage-provisioner/pod.yaml: (1.219430578s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-864151 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e284680c-3349-4215-a77f-97c4a85ed1a3] Pending
helpers_test.go:352: "sp-pod" [e284680c-3349-4215-a77f-97c4a85ed1a3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e284680c-3349-4215-a77f-97c4a85ed1a3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004700485s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-864151 exec sp-pod -- ls /tmp/mount
2025/09/08 13:50:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.57s)
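For reference: the retry.go line above shows the test polling the PVC until its phase flips from Pending to Bound. A simplified stand-in for that loop, shelling out to kubectl with a jsonpath query (context and claim name from the log; the backoff is simplified, the real helper uses a randomized delay):

// Polls a PVC until it reports phase Bound or a deadline passes.
package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

func pvcPhase(context, name string) (string, error) {
    out, err := exec.Command("kubectl", "--context", context,
        "get", "pvc", name, "-o", "jsonpath={.status.phase}").Output()
    return strings.TrimSpace(string(out)), err
}

func main() {
    deadline := time.Now().Add(2 * time.Minute)
    for {
        phase, err := pvcPhase("functional-864151", "myclaim")
        if err == nil && phase == "Bound" {
            fmt.Println("pvc bound")
            return
        }
        if time.Now().After(deadline) {
            fmt.Printf("gave up: phase=%q err=%v\n", phase, err)
            return
        }
        fmt.Printf("will retry: testpvc phase = %q, want \"Bound\"\n", phase)
        time.Sleep(1 * time.Second)
    }
}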

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.62s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh -n functional-864151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 cp functional-864151:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1627091443/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh -n functional-864151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh -n functional-864151 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)

TestFunctional/parallel/MySQL (31.59s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-864151 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-twq25" [3ee504e5-14d7-4c56-b98d-76a56d915874] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-twq25" [3ee504e5-14d7-4c56-b98d-76a56d915874] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.006052303s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-864151 exec mysql-5bb876957f-twq25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-864151 exec mysql-5bb876957f-twq25 -- mysql -ppassword -e "show databases;": exit status 1 (174.305016ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0908 13:50:11.310414 1120875 retry.go:31] will retry after 1.287427129s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-864151 exec mysql-5bb876957f-twq25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-864151 exec mysql-5bb876957f-twq25 -- mysql -ppassword -e "show databases;": exit status 1 (146.189028ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0908 13:50:12.744716 1120875 retry.go:31] will retry after 1.569707974s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-864151 exec mysql-5bb876957f-twq25 -- mysql -ppassword -e "show databases;"
E0908 13:50:15.383230 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (31.59s)
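For reference: ERROR 2002 only means mysqld inside the pod has not yet opened its socket, so the test retries with a growing delay until the query succeeds. A Go sketch of that pattern around kubectl exec (pod name and query from the log; the backoff constants are illustrative):

// Retries a mysql query via kubectl exec while the server is still booting.
package main

import (
    "fmt"
    "os/exec"
    "time"
)

func showDatabases(context, pod string) error {
    cmd := exec.Command("kubectl", "--context", context, "exec", pod, "--",
        "mysql", "-ppassword", "-e", "show databases;")
    out, err := cmd.CombinedOutput()
    if err != nil {
        return fmt.Errorf("%v: %s", err, out)
    }
    fmt.Printf("%s", out)
    return nil
}

func main() {
    delay := time.Second
    for attempt := 1; attempt <= 5; attempt++ {
        err := showDatabases("functional-864151", "mysql-5bb876957f-twq25")
        if err == nil {
            return
        }
        // ERROR 2002 (socket not ready) is transient while mysqld starts up.
        fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, delay, err)
        time.Sleep(delay)
        delay *= 2
    }
}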

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1120875/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "sudo cat /etc/test/nested/copy/1120875/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1120875.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "sudo cat /etc/ssl/certs/1120875.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1120875.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "sudo cat /usr/share/ca-certificates/1120875.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/11208752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "sudo cat /etc/ssl/certs/11208752.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/11208752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "sudo cat /usr/share/ca-certificates/11208752.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)
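For reference: the hash-named paths above (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention of <subject_hash>.<n>. A sketch that derives such a filename for a given PEM file by shelling out to openssl (assumed installed; input path taken from the log):

// Computes the hashed filename OpenSSL would use for a certificate.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// hashedName returns the "<subject_hash>.0" filename under which OpenSSL
// looks a certificate up in a hashed directory like /etc/ssl/certs.
func hashedName(pemPath string) (string, error) {
    out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash", "-in", pemPath).Output()
    if err != nil {
        return "", err
    }
    return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
    name, err := hashedName("/usr/share/ca-certificates/1120875.pem")
    if err != nil {
        fmt.Println("openssl failed:", err)
        return
    }
    fmt.Println("expected hashed filename:", name)
}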

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-864151 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-864151 ssh "sudo systemctl is-active docker": exit status 1 (254.791412ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-864151 ssh "sudo systemctl is-active containerd": exit status 1 (265.372782ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
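For reference: `systemctl is-active` exits 0 for an active unit and non-zero (3 here) otherwise, so a non-zero exit plus "inactive" on stdout is exactly the expected result for the runtimes that should be off. A local Go sketch of that decoding (unit list taken from the test; the real check runs over minikube ssh):

// Interprets systemctl is-active output and exit codes.
package main

import (
    "errors"
    "fmt"
    "os/exec"
    "strings"
)

// isActive runs `systemctl is-active <unit>` and decodes the result.
func isActive(unit string) (bool, string, error) {
    out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
    state := strings.TrimSpace(string(out)) // "active", "inactive", "failed", ...
    var ee *exec.ExitError
    if errors.As(err, &ee) {
        return false, state, nil // non-zero exit: unit is not active
    }
    if err != nil {
        return false, state, err
    }
    return true, state, nil
}

func main() {
    for _, unit := range []string{"docker", "containerd"} {
        active, state, err := isActive(unit)
        if err != nil {
            fmt.Println(unit, "check failed:", err)
            continue
        }
        fmt.Printf("%s: active=%v state=%s\n", unit, active, state)
    }
}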

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-864151 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-864151  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ ad5708199ec7d │ 197MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-864151  │ a89dce97a87fe │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-864151  │ bc432310ca41e │ 1.47MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-864151 image ls --format table --alsologtostderr:
I0908 13:50:10.544899 1130211 out.go:360] Setting OutFile to fd 1 ...
I0908 13:50:10.545635 1130211 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:50:10.545652 1130211 out.go:374] Setting ErrFile to fd 2...
I0908 13:50:10.545660 1130211 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:50:10.545968 1130211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
I0908 13:50:10.546707 1130211 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:50:10.546856 1130211 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:50:10.547291 1130211 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
I0908 13:50:10.547353 1130211 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 13:50:10.564187 1130211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
I0908 13:50:10.564684 1130211 main.go:141] libmachine: () Calling .GetVersion
I0908 13:50:10.565262 1130211 main.go:141] libmachine: Using API Version  1
I0908 13:50:10.565288 1130211 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 13:50:10.565649 1130211 main.go:141] libmachine: () Calling .GetMachineName
I0908 13:50:10.565877 1130211 main.go:141] libmachine: (functional-864151) Calling .GetState
I0908 13:50:10.568137 1130211 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
I0908 13:50:10.568199 1130211 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 13:50:10.584789 1130211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44377
I0908 13:50:10.585372 1130211 main.go:141] libmachine: () Calling .GetVersion
I0908 13:50:10.586002 1130211 main.go:141] libmachine: Using API Version  1
I0908 13:50:10.586037 1130211 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 13:50:10.586483 1130211 main.go:141] libmachine: () Calling .GetMachineName
I0908 13:50:10.586732 1130211 main.go:141] libmachine: (functional-864151) Calling .DriverName
I0908 13:50:10.587042 1130211 ssh_runner.go:195] Run: systemctl --version
I0908 13:50:10.587093 1130211 main.go:141] libmachine: (functional-864151) Calling .GetSSHHostname
I0908 13:50:10.590432 1130211 main.go:141] libmachine: (functional-864151) DBG | domain functional-864151 has defined MAC address 52:54:00:9a:17:49 in network mk-functional-864151
I0908 13:50:10.590989 1130211 main.go:141] libmachine: (functional-864151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:17:49", ip: ""} in network mk-functional-864151: {Iface:virbr1 ExpiryTime:2025-09-08 14:46:34 +0000 UTC Type:0 Mac:52:54:00:9a:17:49 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:functional-864151 Clientid:01:52:54:00:9a:17:49}
I0908 13:50:10.591016 1130211 main.go:141] libmachine: (functional-864151) DBG | domain functional-864151 has defined IP address 192.168.39.136 and MAC address 52:54:00:9a:17:49 in network mk-functional-864151
I0908 13:50:10.591226 1130211 main.go:141] libmachine: (functional-864151) Calling .GetSSHPort
I0908 13:50:10.591405 1130211 main.go:141] libmachine: (functional-864151) Calling .GetSSHKeyPath
I0908 13:50:10.591609 1130211 main.go:141] libmachine: (functional-864151) Calling .GetSSHUsername
I0908 13:50:10.591820 1130211 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/functional-864151/id_rsa Username:docker}
I0908 13:50:10.723293 1130211 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 13:50:10.845561 1130211 main.go:141] libmachine: Making call to close driver server
I0908 13:50:10.845590 1130211 main.go:141] libmachine: (functional-864151) Calling .Close
I0908 13:50:10.846028 1130211 main.go:141] libmachine: Successfully made call to close driver server
I0908 13:50:10.846049 1130211 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 13:50:10.846057 1130211 main.go:141] libmachine: Making call to close driver server
I0908 13:50:10.846065 1130211 main.go:141] libmachine: (functional-864151) Calling .Close
I0908 13:50:10.846065 1130211 main.go:141] libmachine: (functional-864151) DBG | Closing plugin on server side
I0908 13:50:10.846330 1130211 main.go:141] libmachine: Successfully made call to close driver server
I0908 13:50:10.846343 1130211 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)
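For reference: the JSON format exercised by the next test is a flat array of image records. A minimal Go sketch decoding that shape (the struct is ad hoc; field names are taken from the output below, with the sample record shortened):

// Decodes the image-list JSON emitted by `image ls --format json`.
package main

import (
    "encoding/json"
    "fmt"
)

type image struct {
    ID          string   `json:"id"`
    RepoDigests []string `json:"repoDigests"`
    RepoTags    []string `json:"repoTags"`
    Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
    raw := `[{"id":"abc","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"}]`
    var imgs []image
    if err := json.Unmarshal([]byte(raw), &imgs); err != nil {
        panic(err)
    }
    for _, im := range imgs {
        fmt.Println(im.RepoTags, im.Size)
    }
}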

TestFunctional/parallel/ImageCommands/ImageListJson (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 image ls --format json --alsologtostderr: (1.126070069s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-864151 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"a89dce97a87fe015ed5a869db0cd84c3f5e4d5fba51eba19f0c77d267d329ede","repoDigests":["localhost/minikube-local-cache-test@sha256:4331e912af40098f32dc4f4a2e0760b7aa3063f6d8a4455882b5e73fc762f4eb"],"repoTags":["localhost/minikube-local-cache-test:functional-864151"],"size":"3330"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5954bc6589ff0de04066a370858e38105801a92dad672a0ca366163e459a5641","repoDigests":["docker.io/library/ff5701b2764e15467854f9cdc31715ccdbc2609325af19fe5b45fc2ed17474fb-tmp@sha256:1063b120d7edb9aeda1d02032b8a1a4c4d6033c2a0977f2c6f850a283d10ddc4"],"repoTags":[],"size":"1466018"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8
s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013
e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc9
63e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57","docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7"],"repoTags":["docker.io/library/nginx:latest"],"size":"196544386"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gc
r.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"bc432310ca41e92ce315ff7b417e60e1a011341c899d9c2a3dbace9c80e01966","repoDigests":["localhost/my-image@sha256:dc375f0d4d0e9cc5b7bcd8238c4bad30285fa351fcf4716e73330764e836daf4"],"repoTags":["localhost/my-image:functional-864151"],"size":"1468600"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry
.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","l
ocalhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-864151"],"size":"4945246"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-864151 image ls --format json --alsologtostderr:
I0908 13:50:09.414258 1130172 out.go:360] Setting OutFile to fd 1 ...
I0908 13:50:09.414429 1130172 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:50:09.414442 1130172 out.go:374] Setting ErrFile to fd 2...
I0908 13:50:09.414449 1130172 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:50:09.414699 1130172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
I0908 13:50:09.415290 1130172 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:50:09.415402 1130172 config.go:182] Loaded profile config "functional-864151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:50:09.416626 1130172 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
I0908 13:50:09.416754 1130172 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 13:50:09.433913 1130172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
I0908 13:50:09.434597 1130172 main.go:141] libmachine: () Calling .GetVersion
I0908 13:50:09.435273 1130172 main.go:141] libmachine: Using API Version  1
I0908 13:50:09.435301 1130172 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 13:50:09.435883 1130172 main.go:141] libmachine: () Calling .GetMachineName
I0908 13:50:09.436124 1130172 main.go:141] libmachine: (functional-864151) Calling .GetState
I0908 13:50:09.438973 1130172 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
I0908 13:50:09.439041 1130172 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 13:50:09.455866 1130172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40811
I0908 13:50:09.456495 1130172 main.go:141] libmachine: () Calling .GetVersion
I0908 13:50:09.457144 1130172 main.go:141] libmachine: Using API Version  1
I0908 13:50:09.457178 1130172 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 13:50:09.457600 1130172 main.go:141] libmachine: () Calling .GetMachineName
I0908 13:50:09.457885 1130172 main.go:141] libmachine: (functional-864151) Calling .DriverName
I0908 13:50:09.458164 1130172 ssh_runner.go:195] Run: systemctl --version
I0908 13:50:09.458201 1130172 main.go:141] libmachine: (functional-864151) Calling .GetSSHHostname
I0908 13:50:09.461797 1130172 main.go:141] libmachine: (functional-864151) DBG | domain functional-864151 has defined MAC address 52:54:00:9a:17:49 in network mk-functional-864151
I0908 13:50:09.462575 1130172 main.go:141] libmachine: (functional-864151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:17:49", ip: ""} in network mk-functional-864151: {Iface:virbr1 ExpiryTime:2025-09-08 14:46:34 +0000 UTC Type:0 Mac:52:54:00:9a:17:49 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:functional-864151 Clientid:01:52:54:00:9a:17:49}
I0908 13:50:09.462622 1130172 main.go:141] libmachine: (functional-864151) DBG | domain functional-864151 has defined IP address 192.168.39.136 and MAC address 52:54:00:9a:17:49 in network mk-functional-864151
I0908 13:50:09.462896 1130172 main.go:141] libmachine: (functional-864151) Calling .GetSSHPort
I0908 13:50:09.463168 1130172 main.go:141] libmachine: (functional-864151) Calling .GetSSHKeyPath
I0908 13:50:09.463409 1130172 main.go:141] libmachine: (functional-864151) Calling .GetSSHUsername
I0908 13:50:09.463608 1130172 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/functional-864151/id_rsa Username:docker}
I0908 13:50:09.597785 1130172 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 13:50:10.478956 1130172 main.go:141] libmachine: Making call to close driver server
I0908 13:50:10.478970 1130172 main.go:141] libmachine: (functional-864151) Calling .Close
I0908 13:50:10.479349 1130172 main.go:141] libmachine: Successfully made call to close driver server
I0908 13:50:10.479389 1130172 main.go:141] libmachine: (functional-864151) DBG | Closing plugin on server side
I0908 13:50:10.479414 1130172 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 13:50:10.479437 1130172 main.go:141] libmachine: Making call to close driver server
I0908 13:50:10.479450 1130172 main.go:141] libmachine: (functional-864151) Calling .Close
I0908 13:50:10.479717 1130172 main.go:141] libmachine: Successfully made call to close driver server
I0908 13:50:10.479735 1130172 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.13s)
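
For reference, the JSON above is a flat array of {id, repoDigests, repoTags, size} objects, so it lends itself to post-processing outside the test harness. A minimal sketch, assuming jq is installed on the host (the filter itself is illustrative and not part of the test):

  out/minikube-linux-amd64 -p functional-864151 image ls --format json \
    | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0]) \(.size)"'   # print tag and size, skipping untagged layers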

TestFunctional/parallel/ImageCommands/Setup (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-864151
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image load --daemon kicbase/echo-server:functional-864151 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-864151 image load --daemon kicbase/echo-server:functional-864151 --alsologtostderr: (1.631126658s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.93s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-864151 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-864151 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-dxfk6" [207aed6b-bbb8-4a59-a91f-fae27fde2ea4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-dxfk6" [207aed6b-bbb8-4a59-a91f-fae27fde2ea4] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.007494182s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.24s)
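
The three steps above are a standard deploy-and-expose flow. A minimal sketch of the equivalent manual commands, assuming the functional-864151 context; the kubectl wait call is an illustrative stand-in for the harness's pod polling:

  kubectl --context functional-864151 create deployment hello-node --image kicbase/echo-server
  kubectl --context functional-864151 expose deployment hello-node --type=NodePort --port=8080
  # block until the pod reports Ready, as the test does via its label selector
  kubectl --context functional-864151 wait --for=condition=ready pod -l app=hello-node --timeout=10m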

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image load --daemon kicbase/echo-server:functional-864151 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-864151
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image load --daemon kicbase/echo-server:functional-864151 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image save kicbase/echo-server:functional-864151 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image rm kicbase/echo-server:functional-864151 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)
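
ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/restore round trip through a tarball. A minimal sketch of that cycle using the same flags, with /tmp/echo-server-save.tar as an illustrative stand-in for the workspace path used above:

  out/minikube-linux-amd64 -p functional-864151 image save kicbase/echo-server:functional-864151 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-864151 image rm kicbase/echo-server:functional-864151
  out/minikube-linux-amd64 -p functional-864151 image load /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-864151 image ls   # the tag should be back in the list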

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-864151
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 image save --daemon kicbase/echo-server:functional-864151 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-864151
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)
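
As the three commands above show, image save --daemon exports the image from the cluster back into the host Docker daemon, where the test then finds it under a localhost/ prefix. A minimal sketch of the same round trip:

  docker rmi kicbase/echo-server:functional-864151                         # drop the host copy first
  out/minikube-linux-amd64 -p functional-864151 image save --daemon kicbase/echo-server:functional-864151
  docker image inspect localhost/kicbase/echo-server:functional-864151    # confirm it came back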

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctional/parallel/MountCmd/any-port (14.95s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-864151 /tmp/TestFunctionalparallelMountCmdany-port2718512174/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757339379910035926" to /tmp/TestFunctionalparallelMountCmdany-port2718512174/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757339379910035926" to /tmp/TestFunctionalparallelMountCmdany-port2718512174/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757339379910035926" to /tmp/TestFunctionalparallelMountCmdany-port2718512174/001/test-1757339379910035926
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (258.308422ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0908 13:49:40.168679 1120875 retry.go:31] will retry after 361.164248ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 13:49 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 13:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 13:49 test-1757339379910035926
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh cat /mount-9p/test-1757339379910035926
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-864151 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [70925308-9219-463e-964f-5db0fcce2c4f] Pending
helpers_test.go:352: "busybox-mount" [70925308-9219-463e-964f-5db0fcce2c4f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [70925308-9219-463e-964f-5db0fcce2c4f] Running
helpers_test.go:352: "busybox-mount" [70925308-9219-463e-964f-5db0fcce2c4f] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [70925308-9219-463e-964f-5db0fcce2c4f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.010473558s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-864151 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-864151 /tmp/TestFunctionalparallelMountCmdany-port2718512174/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.95s)
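
The sequence above is the usual 9p mount workflow: start the mount in the background, confirm it from inside the guest with findmnt, then exercise it from a pod. A minimal sketch, assuming /tmp/host-dir exists on the host (the path is illustrative):

  out/minikube-linux-amd64 mount -p functional-864151 /tmp/host-dir:/mount-9p &
  out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-864151 ssh -- ls -la /mount-9p
  kill %1   # stop the background mount when finished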

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "391.042067ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.597928ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 service list -o json
functional_test.go:1504: Took "460.604419ms" to run "out/minikube-linux-amd64 -p functional-864151 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "473.945212ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.362677ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)
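
Both timings above come from the same command in full and --light form. A minimal sketch of consuming the output, assuming jq is available; the .valid[].Name path reflects the {invalid, valid} envelope minikube emits for profile lists and is worth verifying against your version:

  out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'   # list profile names only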

TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.136:31103
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.136:31103
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)
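
Format, URL, and HTTPS above all resolve the same NodePort endpoint in different shapes. A minimal sketch that captures the URL and probes it; the curl step is an illustrative addition, not part of the test:

  URL=$(out/minikube-linux-amd64 -p functional-864151 service hello-node --url)
  curl -s "$URL"   # hit the echo server at the resolved NodePort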

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.74s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

TestFunctional/parallel/MountCmd/specific-port (2.05s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-864151 /tmp/TestFunctionalparallelMountCmdspecific-port3947269840/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T /mount-9p | grep 9p"
I0908 13:49:55.004570 1120875 detect.go:223] nested VM detected
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.42406ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0908 13:49:55.131471 1120875 retry.go:31] will retry after 597.098651ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-864151 /tmp/TestFunctionalparallelMountCmdspecific-port3947269840/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-864151 ssh "sudo umount -f /mount-9p": exit status 1 (248.691695ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-864151 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-864151 /tmp/TestFunctionalparallelMountCmdspecific-port3947269840/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.05s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-864151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3001516266/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-864151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3001516266/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-864151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3001516266/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T" /mount1: exit status 1 (400.769561ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0908 13:49:57.306709 1120875 retry.go:31] will retry after 578.231903ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-864151 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-864151 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-864151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3001516266/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-864151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3001516266/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-864151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3001516266/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)
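
VerifyCleanup depends on mount --kill=true tearing down every mount daemon for the profile in one call. A minimal sketch of that cleanup path, with /tmp/shared as an illustrative host directory:

  out/minikube-linux-amd64 mount -p functional-864151 /tmp/shared:/mount1 &
  out/minikube-linux-amd64 mount -p functional-864151 /tmp/shared:/mount2 &
  out/minikube-linux-amd64 mount -p functional-864151 --kill=true   # stops all mount processes for the profile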

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-864151
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-864151
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-864151
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (275.09s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0908 13:51:37.305010 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:53:53.441750 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:21.147105 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:31.163326 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:31.169806 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:31.181321 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:31.202916 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:31.244475 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:31.326084 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:31.487752 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:31.809538 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:32.450998 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:33.733407 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:36.295285 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:41.417031 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:51.658456 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (4m34.272754928s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (275.09s)
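
The start invocation above brings up a multi-control-plane cluster before the status check. A minimal sketch of the same bring-up, with the flags copied from the test run:

  out/minikube-linux-amd64 -p ha-385528 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5   # verify every node reports Running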

TestMultiControlPlane/serial/DeployApp (6.37s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 kubectl -- rollout status deployment/busybox: (3.850440593s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-969b8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-lmh6d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-qtvqn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-969b8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-lmh6d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-qtvqn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-969b8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-lmh6d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-qtvqn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.37s)

TestMultiControlPlane/serial/PingHostFromPods (1.42s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-969b8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-969b8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-lmh6d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-lmh6d -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-qtvqn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-qtvqn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.42s)
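
The pipeline above extracts the resolved address of host.minikube.internal (nslookup's fifth output line, third space-delimited field) and then pings the host gateway from each pod. A minimal sketch against a single pod, copied from the run above:

  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-969b8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-amd64 -p ha-385528 kubectl -- exec busybox-7b57f96db7-969b8 -- sh -c "ping -c 1 192.168.39.1"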

TestMultiControlPlane/serial/AddWorkerNode (57.77s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 node add --alsologtostderr -v 5
E0908 13:55:12.140782 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:55:53.103860 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 node add --alsologtostderr -v 5: (56.784695594s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.77s)

TestMultiControlPlane/serial/NodeLabels (0.09s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-385528 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.09s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.029490651s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (14.72s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 status --output json --alsologtostderr -v 5: (1.004653921s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp testdata/cp-test.txt ha-385528:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile303937128/001/cp-test_ha-385528.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528:/home/docker/cp-test.txt ha-385528-m02:/home/docker/cp-test_ha-385528_ha-385528-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m02 "sudo cat /home/docker/cp-test_ha-385528_ha-385528-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528:/home/docker/cp-test.txt ha-385528-m03:/home/docker/cp-test_ha-385528_ha-385528-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m03 "sudo cat /home/docker/cp-test_ha-385528_ha-385528-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528:/home/docker/cp-test.txt ha-385528-m04:/home/docker/cp-test_ha-385528_ha-385528-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m04 "sudo cat /home/docker/cp-test_ha-385528_ha-385528-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp testdata/cp-test.txt ha-385528-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile303937128/001/cp-test_ha-385528-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m02:/home/docker/cp-test.txt ha-385528:/home/docker/cp-test_ha-385528-m02_ha-385528.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528 "sudo cat /home/docker/cp-test_ha-385528-m02_ha-385528.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m02:/home/docker/cp-test.txt ha-385528-m03:/home/docker/cp-test_ha-385528-m02_ha-385528-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m03 "sudo cat /home/docker/cp-test_ha-385528-m02_ha-385528-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m02:/home/docker/cp-test.txt ha-385528-m04:/home/docker/cp-test_ha-385528-m02_ha-385528-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m04 "sudo cat /home/docker/cp-test_ha-385528-m02_ha-385528-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp testdata/cp-test.txt ha-385528-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile303937128/001/cp-test_ha-385528-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m03:/home/docker/cp-test.txt ha-385528:/home/docker/cp-test_ha-385528-m03_ha-385528.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528 "sudo cat /home/docker/cp-test_ha-385528-m03_ha-385528.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m03:/home/docker/cp-test.txt ha-385528-m02:/home/docker/cp-test_ha-385528-m03_ha-385528-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m02 "sudo cat /home/docker/cp-test_ha-385528-m03_ha-385528-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m03:/home/docker/cp-test.txt ha-385528-m04:/home/docker/cp-test_ha-385528-m03_ha-385528-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m04 "sudo cat /home/docker/cp-test_ha-385528-m03_ha-385528-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp testdata/cp-test.txt ha-385528-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile303937128/001/cp-test_ha-385528-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m04:/home/docker/cp-test.txt ha-385528:/home/docker/cp-test_ha-385528-m04_ha-385528.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528 "sudo cat /home/docker/cp-test_ha-385528-m04_ha-385528.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m04:/home/docker/cp-test.txt ha-385528-m02:/home/docker/cp-test_ha-385528-m04_ha-385528-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m02 "sudo cat /home/docker/cp-test_ha-385528-m04_ha-385528-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 cp ha-385528-m04:/home/docker/cp-test.txt ha-385528-m03:/home/docker/cp-test_ha-385528-m04_ha-385528-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 ssh -n ha-385528-m03 "sudo cat /home/docker/cp-test_ha-385528-m04_ha-385528-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.72s)
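
The CopyFile matrix above is a copy-then-read-back check: every `cp` into a node is followed by `ssh -n <node> "sudo cat ..."` and a comparison against the source file. A minimal sketch of one such round trip, assuming the binary path used in this run; this is an illustration, not the actual helpers_test.go implementation:

	// Minimal sketch of the copy-then-read-back check above; the binary
	// path, helper name, and TrimSpace normalization are assumptions.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func copyRoundTrip(profile, node, src, dst string) error {
		bin := "out/minikube-linux-amd64"
		if out, err := exec.Command(bin, "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
			return fmt.Errorf("cp failed: %v: %s", err, out)
		}
		got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
		if err != nil {
			return fmt.Errorf("read-back failed: %v", err)
		}
		want, err := os.ReadFile(src)
		if err != nil {
			return err
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			return fmt.Errorf("content mismatch for %s:%s", node, dst)
		}
		return nil
	}

	func main() {
		if err := copyRoundTrip("ha-385528", "ha-385528-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}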

TestMultiControlPlane/serial/StopSecondaryNode (91.5s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 node stop m02 --alsologtostderr -v 5
E0908 13:57:15.026100 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 node stop m02 --alsologtostderr -v 5: (1m30.728457378s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5: exit status 7 (766.299825ms)

-- stdout --
	ha-385528
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-385528-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-385528-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-385528-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0908 13:57:53.001766 1135177 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:57:53.002064 1135177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:57:53.002075 1135177 out.go:374] Setting ErrFile to fd 2...
	I0908 13:57:53.002079 1135177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:57:53.002322 1135177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 13:57:53.002523 1135177 out.go:368] Setting JSON to false
	I0908 13:57:53.002558 1135177 mustload.go:65] Loading cluster: ha-385528
	I0908 13:57:53.002731 1135177 notify.go:220] Checking for updates...
	I0908 13:57:53.002928 1135177 config.go:182] Loaded profile config "ha-385528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:57:53.002951 1135177 status.go:174] checking status of ha-385528 ...
	I0908 13:57:53.003512 1135177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:57:53.003593 1135177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:57:53.023081 1135177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0908 13:57:53.023625 1135177 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:57:53.024267 1135177 main.go:141] libmachine: Using API Version  1
	I0908 13:57:53.024291 1135177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:57:53.024729 1135177 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:57:53.024949 1135177 main.go:141] libmachine: (ha-385528) Calling .GetState
	I0908 13:57:53.026759 1135177 status.go:371] ha-385528 host status = "Running" (err=<nil>)
	I0908 13:57:53.026782 1135177 host.go:66] Checking if "ha-385528" exists ...
	I0908 13:57:53.027099 1135177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:57:53.027144 1135177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:57:53.044207 1135177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40013
	I0908 13:57:53.044799 1135177 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:57:53.045400 1135177 main.go:141] libmachine: Using API Version  1
	I0908 13:57:53.045451 1135177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:57:53.045833 1135177 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:57:53.046033 1135177 main.go:141] libmachine: (ha-385528) Calling .GetIP
	I0908 13:57:53.049592 1135177 main.go:141] libmachine: (ha-385528) DBG | domain ha-385528 has defined MAC address 52:54:00:0c:73:9e in network mk-ha-385528
	I0908 13:57:53.050074 1135177 main.go:141] libmachine: (ha-385528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:73:9e", ip: ""} in network mk-ha-385528: {Iface:virbr1 ExpiryTime:2025-09-08 14:50:41 +0000 UTC Type:0 Mac:52:54:00:0c:73:9e Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-385528 Clientid:01:52:54:00:0c:73:9e}
	I0908 13:57:53.050109 1135177 main.go:141] libmachine: (ha-385528) DBG | domain ha-385528 has defined IP address 192.168.39.55 and MAC address 52:54:00:0c:73:9e in network mk-ha-385528
	I0908 13:57:53.050367 1135177 host.go:66] Checking if "ha-385528" exists ...
	I0908 13:57:53.050801 1135177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:57:53.050856 1135177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:57:53.067410 1135177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36263
	I0908 13:57:53.068044 1135177 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:57:53.068564 1135177 main.go:141] libmachine: Using API Version  1
	I0908 13:57:53.068584 1135177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:57:53.068909 1135177 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:57:53.069101 1135177 main.go:141] libmachine: (ha-385528) Calling .DriverName
	I0908 13:57:53.069363 1135177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:57:53.069408 1135177 main.go:141] libmachine: (ha-385528) Calling .GetSSHHostname
	I0908 13:57:53.072796 1135177 main.go:141] libmachine: (ha-385528) DBG | domain ha-385528 has defined MAC address 52:54:00:0c:73:9e in network mk-ha-385528
	I0908 13:57:53.073420 1135177 main.go:141] libmachine: (ha-385528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:73:9e", ip: ""} in network mk-ha-385528: {Iface:virbr1 ExpiryTime:2025-09-08 14:50:41 +0000 UTC Type:0 Mac:52:54:00:0c:73:9e Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-385528 Clientid:01:52:54:00:0c:73:9e}
	I0908 13:57:53.073449 1135177 main.go:141] libmachine: (ha-385528) DBG | domain ha-385528 has defined IP address 192.168.39.55 and MAC address 52:54:00:0c:73:9e in network mk-ha-385528
	I0908 13:57:53.073659 1135177 main.go:141] libmachine: (ha-385528) Calling .GetSSHPort
	I0908 13:57:53.073869 1135177 main.go:141] libmachine: (ha-385528) Calling .GetSSHKeyPath
	I0908 13:57:53.074075 1135177 main.go:141] libmachine: (ha-385528) Calling .GetSSHUsername
	I0908 13:57:53.074265 1135177 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/ha-385528/id_rsa Username:docker}
	I0908 13:57:53.163077 1135177 ssh_runner.go:195] Run: systemctl --version
	I0908 13:57:53.171860 1135177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:57:53.193546 1135177 kubeconfig.go:125] found "ha-385528" server: "https://192.168.39.254:8443"
	I0908 13:57:53.193591 1135177 api_server.go:166] Checking apiserver status ...
	I0908 13:57:53.193633 1135177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:57:53.218334 1135177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1406/cgroup
	W0908 13:57:53.232430 1135177 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1406/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 13:57:53.232520 1135177 ssh_runner.go:195] Run: ls
	I0908 13:57:53.241068 1135177 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0908 13:57:53.247256 1135177 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0908 13:57:53.247288 1135177 status.go:463] ha-385528 apiserver status = Running (err=<nil>)
	I0908 13:57:53.247300 1135177 status.go:176] ha-385528 status: &{Name:ha-385528 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:57:53.247320 1135177 status.go:174] checking status of ha-385528-m02 ...
	I0908 13:57:53.247713 1135177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:57:53.247768 1135177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:57:53.265239 1135177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36895
	I0908 13:57:53.265910 1135177 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:57:53.266541 1135177 main.go:141] libmachine: Using API Version  1
	I0908 13:57:53.266578 1135177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:57:53.266990 1135177 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:57:53.267190 1135177 main.go:141] libmachine: (ha-385528-m02) Calling .GetState
	I0908 13:57:53.269084 1135177 status.go:371] ha-385528-m02 host status = "Stopped" (err=<nil>)
	I0908 13:57:53.269106 1135177 status.go:384] host is not running, skipping remaining checks
	I0908 13:57:53.269114 1135177 status.go:176] ha-385528-m02 status: &{Name:ha-385528-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:57:53.269141 1135177 status.go:174] checking status of ha-385528-m03 ...
	I0908 13:57:53.269514 1135177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:57:53.269566 1135177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:57:53.286797 1135177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0908 13:57:53.287344 1135177 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:57:53.287919 1135177 main.go:141] libmachine: Using API Version  1
	I0908 13:57:53.287972 1135177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:57:53.288443 1135177 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:57:53.288651 1135177 main.go:141] libmachine: (ha-385528-m03) Calling .GetState
	I0908 13:57:53.290489 1135177 status.go:371] ha-385528-m03 host status = "Running" (err=<nil>)
	I0908 13:57:53.290512 1135177 host.go:66] Checking if "ha-385528-m03" exists ...
	I0908 13:57:53.290814 1135177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:57:53.290871 1135177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:57:53.308326 1135177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43509
	I0908 13:57:53.308899 1135177 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:57:53.309466 1135177 main.go:141] libmachine: Using API Version  1
	I0908 13:57:53.309492 1135177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:57:53.309845 1135177 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:57:53.310074 1135177 main.go:141] libmachine: (ha-385528-m03) Calling .GetIP
	I0908 13:57:53.313343 1135177 main.go:141] libmachine: (ha-385528-m03) DBG | domain ha-385528-m03 has defined MAC address 52:54:00:ce:a6:10 in network mk-ha-385528
	I0908 13:57:53.313902 1135177 main.go:141] libmachine: (ha-385528-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:a6:10", ip: ""} in network mk-ha-385528: {Iface:virbr1 ExpiryTime:2025-09-08 14:53:21 +0000 UTC Type:0 Mac:52:54:00:ce:a6:10 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-385528-m03 Clientid:01:52:54:00:ce:a6:10}
	I0908 13:57:53.313928 1135177 main.go:141] libmachine: (ha-385528-m03) DBG | domain ha-385528-m03 has defined IP address 192.168.39.36 and MAC address 52:54:00:ce:a6:10 in network mk-ha-385528
	I0908 13:57:53.314142 1135177 host.go:66] Checking if "ha-385528-m03" exists ...
	I0908 13:57:53.314600 1135177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:57:53.314661 1135177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:57:53.331503 1135177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
	I0908 13:57:53.332099 1135177 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:57:53.332627 1135177 main.go:141] libmachine: Using API Version  1
	I0908 13:57:53.332650 1135177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:57:53.333067 1135177 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:57:53.333343 1135177 main.go:141] libmachine: (ha-385528-m03) Calling .DriverName
	I0908 13:57:53.333549 1135177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:57:53.333575 1135177 main.go:141] libmachine: (ha-385528-m03) Calling .GetSSHHostname
	I0908 13:57:53.337610 1135177 main.go:141] libmachine: (ha-385528-m03) DBG | domain ha-385528-m03 has defined MAC address 52:54:00:ce:a6:10 in network mk-ha-385528
	I0908 13:57:53.338154 1135177 main.go:141] libmachine: (ha-385528-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:a6:10", ip: ""} in network mk-ha-385528: {Iface:virbr1 ExpiryTime:2025-09-08 14:53:21 +0000 UTC Type:0 Mac:52:54:00:ce:a6:10 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-385528-m03 Clientid:01:52:54:00:ce:a6:10}
	I0908 13:57:53.338190 1135177 main.go:141] libmachine: (ha-385528-m03) DBG | domain ha-385528-m03 has defined IP address 192.168.39.36 and MAC address 52:54:00:ce:a6:10 in network mk-ha-385528
	I0908 13:57:53.338386 1135177 main.go:141] libmachine: (ha-385528-m03) Calling .GetSSHPort
	I0908 13:57:53.338622 1135177 main.go:141] libmachine: (ha-385528-m03) Calling .GetSSHKeyPath
	I0908 13:57:53.338837 1135177 main.go:141] libmachine: (ha-385528-m03) Calling .GetSSHUsername
	I0908 13:57:53.338992 1135177 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/ha-385528-m03/id_rsa Username:docker}
	I0908 13:57:53.439288 1135177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:57:53.464447 1135177 kubeconfig.go:125] found "ha-385528" server: "https://192.168.39.254:8443"
	I0908 13:57:53.464481 1135177 api_server.go:166] Checking apiserver status ...
	I0908 13:57:53.464528 1135177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:57:53.492922 1135177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1803/cgroup
	W0908 13:57:53.509582 1135177 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1803/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 13:57:53.509651 1135177 ssh_runner.go:195] Run: ls
	I0908 13:57:53.515808 1135177 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0908 13:57:53.523283 1135177 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0908 13:57:53.523317 1135177 status.go:463] ha-385528-m03 apiserver status = Running (err=<nil>)
	I0908 13:57:53.523328 1135177 status.go:176] ha-385528-m03 status: &{Name:ha-385528-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:57:53.523350 1135177 status.go:174] checking status of ha-385528-m04 ...
	I0908 13:57:53.523804 1135177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:57:53.523863 1135177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:57:53.540255 1135177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I0908 13:57:53.540891 1135177 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:57:53.541418 1135177 main.go:141] libmachine: Using API Version  1
	I0908 13:57:53.541440 1135177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:57:53.541822 1135177 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:57:53.542137 1135177 main.go:141] libmachine: (ha-385528-m04) Calling .GetState
	I0908 13:57:53.543978 1135177 status.go:371] ha-385528-m04 host status = "Running" (err=<nil>)
	I0908 13:57:53.544004 1135177 host.go:66] Checking if "ha-385528-m04" exists ...
	I0908 13:57:53.544402 1135177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:57:53.544451 1135177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:57:53.561248 1135177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0908 13:57:53.561803 1135177 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:57:53.562365 1135177 main.go:141] libmachine: Using API Version  1
	I0908 13:57:53.562392 1135177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:57:53.562756 1135177 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:57:53.562999 1135177 main.go:141] libmachine: (ha-385528-m04) Calling .GetIP
	I0908 13:57:53.566107 1135177 main.go:141] libmachine: (ha-385528-m04) DBG | domain ha-385528-m04 has defined MAC address 52:54:00:88:55:24 in network mk-ha-385528
	I0908 13:57:53.566690 1135177 main.go:141] libmachine: (ha-385528-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:55:24", ip: ""} in network mk-ha-385528: {Iface:virbr1 ExpiryTime:2025-09-08 14:55:29 +0000 UTC Type:0 Mac:52:54:00:88:55:24 Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-385528-m04 Clientid:01:52:54:00:88:55:24}
	I0908 13:57:53.566727 1135177 main.go:141] libmachine: (ha-385528-m04) DBG | domain ha-385528-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:88:55:24 in network mk-ha-385528
	I0908 13:57:53.566921 1135177 host.go:66] Checking if "ha-385528-m04" exists ...
	I0908 13:57:53.567238 1135177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 13:57:53.567284 1135177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 13:57:53.584041 1135177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
	I0908 13:57:53.584723 1135177 main.go:141] libmachine: () Calling .GetVersion
	I0908 13:57:53.585417 1135177 main.go:141] libmachine: Using API Version  1
	I0908 13:57:53.585446 1135177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 13:57:53.585839 1135177 main.go:141] libmachine: () Calling .GetMachineName
	I0908 13:57:53.586049 1135177 main.go:141] libmachine: (ha-385528-m04) Calling .DriverName
	I0908 13:57:53.586310 1135177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:57:53.586340 1135177 main.go:141] libmachine: (ha-385528-m04) Calling .GetSSHHostname
	I0908 13:57:53.589528 1135177 main.go:141] libmachine: (ha-385528-m04) DBG | domain ha-385528-m04 has defined MAC address 52:54:00:88:55:24 in network mk-ha-385528
	I0908 13:57:53.589981 1135177 main.go:141] libmachine: (ha-385528-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:55:24", ip: ""} in network mk-ha-385528: {Iface:virbr1 ExpiryTime:2025-09-08 14:55:29 +0000 UTC Type:0 Mac:52:54:00:88:55:24 Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-385528-m04 Clientid:01:52:54:00:88:55:24}
	I0908 13:57:53.590017 1135177 main.go:141] libmachine: (ha-385528-m04) DBG | domain ha-385528-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:88:55:24 in network mk-ha-385528
	I0908 13:57:53.590262 1135177 main.go:141] libmachine: (ha-385528-m04) Calling .GetSSHPort
	I0908 13:57:53.590488 1135177 main.go:141] libmachine: (ha-385528-m04) Calling .GetSSHKeyPath
	I0908 13:57:53.590622 1135177 main.go:141] libmachine: (ha-385528-m04) Calling .GetSSHUsername
	I0908 13:57:53.590757 1135177 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/ha-385528-m04/id_rsa Username:docker}
	I0908 13:57:53.683044 1135177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:57:53.708401 1135177 status.go:176] ha-385528-m04 status: &{Name:ha-385528-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.50s)
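
Note that the Non-zero exit (status 7) from `minikube status` above is the expected outcome here, not a failure: `status` exits non-zero whenever some node is not fully running, and the test asserts on the printed per-node states instead. A sketch of recovering that exit code with only the standard library:

	// Sketch: read the status exit code; 7 in this run meant at least
	// one host reported Stopped.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-385528", "status").CombinedOutput()
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode() // non-zero whenever some node is not fully running
		}
		fmt.Printf("exit=%d\n%s", code, out)
	}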

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

TestMultiControlPlane/serial/RestartSecondaryNode (37.65s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 node start m02 --alsologtostderr -v 5: (36.541979343s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5: (1.033000863s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.65s)
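
After `node start m02`, the test simply re-runs `status` and expects every node to report Running again. A hypothetical polling loop that waits for the same condition; the deadline and sleep interval are illustrative assumptions:

	// Hypothetical polling loop: wait until no node reports Stopped.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(3 * time.Minute)
		for {
			out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-385528", "status").CombinedOutput()
			if !strings.Contains(string(out), "Stopped") {
				fmt.Println("all nodes back to Running")
				return
			}
			if time.Now().After(deadline) {
				fmt.Printf("timed out; last status:\n%s", out)
				return
			}
			time.Sleep(5 * time.Second)
		}
	}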

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.045587242s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (472.89s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 stop --alsologtostderr -v 5
E0908 13:58:53.440657 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:59:31.163745 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:59:58.868270 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 stop --alsologtostderr -v 5: (4m34.738575022s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 start --wait true --alsologtostderr -v 5
E0908 14:03:53.440041 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:04:31.163601 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:16.509563 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 start --wait true --alsologtostderr -v 5: (3m18.030250587s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (472.89s)
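
The `node list` output is captured before the stop/start cycle and compared with the list afterwards; the test passes only if the restart preserved the same node set. A minimal sketch of that comparison, using the profile name from this run, with the stop/start step elided:

	// Sketch of the before/after node-list comparison.
	package main

	import (
		"bytes"
		"log"
		"os/exec"
	)

	func nodeList(profile string) []byte {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "node", "list").Output()
		if err != nil {
			log.Fatal(err)
		}
		return out
	}

	func main() {
		before := nodeList("ha-385528")
		// ... run `minikube stop` and `minikube start --wait true` here ...
		after := nodeList("ha-385528")
		if !bytes.Equal(before, after) {
			log.Fatalf("node list changed across restart:\n%s---\n%s", before, after)
		}
	}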

TestMultiControlPlane/serial/DeleteSecondaryNode (19.34s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 node delete m03 --alsologtostderr -v 5: (18.427475129s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.34s)
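
The go-template in the final kubectl call prints one line per node containing the status of its Ready condition, so counting "True" lines gives the number of Ready nodes; with m03 deleted, three remain. A sketch of running the same template from Go; the expected count of 3 is inferred from this run's topology:

	// Sketch: same Ready-condition template, counting "True" lines.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("%d Ready nodes\n", strings.Count(string(out), "True"))
	}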

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

TestMultiControlPlane/serial/StopCluster (272.91s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 stop --alsologtostderr -v 5
E0908 14:08:53.441042 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:09:31.163181 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:54.230203 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 stop --alsologtostderr -v 5: (4m32.779238523s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5: exit status 7 (125.94156ms)

-- stdout --
	ha-385528
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-385528-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-385528-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 14:11:18.973745 1139355 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:11:18.974034 1139355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:11:18.974046 1139355 out.go:374] Setting ErrFile to fd 2...
	I0908 14:11:18.974050 1139355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:11:18.974265 1139355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 14:11:18.974483 1139355 out.go:368] Setting JSON to false
	I0908 14:11:18.974521 1139355 mustload.go:65] Loading cluster: ha-385528
	I0908 14:11:18.974697 1139355 notify.go:220] Checking for updates...
	I0908 14:11:18.975071 1139355 config.go:182] Loaded profile config "ha-385528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:11:18.975106 1139355 status.go:174] checking status of ha-385528 ...
	I0908 14:11:18.975762 1139355 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:11:18.975817 1139355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:11:18.999204 1139355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44815
	I0908 14:11:18.999859 1139355 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:11:19.000560 1139355 main.go:141] libmachine: Using API Version  1
	I0908 14:11:19.000590 1139355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:11:19.000987 1139355 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:11:19.001287 1139355 main.go:141] libmachine: (ha-385528) Calling .GetState
	I0908 14:11:19.002906 1139355 status.go:371] ha-385528 host status = "Stopped" (err=<nil>)
	I0908 14:11:19.002923 1139355 status.go:384] host is not running, skipping remaining checks
	I0908 14:11:19.002929 1139355 status.go:176] ha-385528 status: &{Name:ha-385528 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:11:19.002985 1139355 status.go:174] checking status of ha-385528-m02 ...
	I0908 14:11:19.003347 1139355 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:11:19.003398 1139355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:11:19.019306 1139355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33701
	I0908 14:11:19.019948 1139355 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:11:19.020543 1139355 main.go:141] libmachine: Using API Version  1
	I0908 14:11:19.020576 1139355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:11:19.020924 1139355 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:11:19.021109 1139355 main.go:141] libmachine: (ha-385528-m02) Calling .GetState
	I0908 14:11:19.022740 1139355 status.go:371] ha-385528-m02 host status = "Stopped" (err=<nil>)
	I0908 14:11:19.022757 1139355 status.go:384] host is not running, skipping remaining checks
	I0908 14:11:19.022763 1139355 status.go:176] ha-385528-m02 status: &{Name:ha-385528-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:11:19.022794 1139355 status.go:174] checking status of ha-385528-m04 ...
	I0908 14:11:19.023255 1139355 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:11:19.023304 1139355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:11:19.040162 1139355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41417
	I0908 14:11:19.040683 1139355 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:11:19.041257 1139355 main.go:141] libmachine: Using API Version  1
	I0908 14:11:19.041282 1139355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:11:19.041685 1139355 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:11:19.041934 1139355 main.go:141] libmachine: (ha-385528-m04) Calling .GetState
	I0908 14:11:19.043693 1139355 status.go:371] ha-385528-m04 host status = "Stopped" (err=<nil>)
	I0908 14:11:19.043717 1139355 status.go:384] host is not running, skipping remaining checks
	I0908 14:11:19.043723 1139355 status.go:176] ha-385528-m04 status: &{Name:ha-385528-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.91s)

TestMultiControlPlane/serial/RestartCluster (115.38s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m54.509090283s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (115.38s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

TestMultiControlPlane/serial/AddSecondaryNode (115.26s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 node add --control-plane --alsologtostderr -v 5
E0908 14:13:53.441110 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:14:31.163522 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-385528 node add --control-plane --alsologtostderr -v 5: (1m54.274688775s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-385528 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (115.26s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

TestJSONOutput/start/Command (88.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-062686 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-062686 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.911186158s)
--- PASS: TestJSONOutput/start/Command (88.91s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
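
The JSONOutput subtests work on the event stream that `--output=json` produces: one CloudEvent per stdout line, with step events of type `io.k8s.sigs.minikube.step` carrying a `data.currentstep` counter. DistinctCurrentSteps asserts the counter never repeats; IncreasingCurrentSteps asserts it never decreases. A sketch of both checks over a captured stream; reading from stdin is an assumption for illustration:

	// Sketch of the Distinct/Increasing step assertions.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strconv"
	)

	type event struct {
		Type string `json:"type"`
		Data struct {
			CurrentStep string `json:"currentstep"`
		} `json:"data"`
	}

	func main() {
		seen, last := map[int]bool{}, -1
		sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in here
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
				continue
			}
			n, err := strconv.Atoi(ev.Data.CurrentStep)
			if err != nil {
				continue
			}
			if seen[n] {
				fmt.Printf("step %d repeated\n", n) // DistinctCurrentSteps would fail
			}
			if n < last {
				fmt.Printf("step %d after %d\n", n, last) // IncreasingCurrentSteps would fail
			}
			seen[n], last = true, n
		}
	}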

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.86s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-062686 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.86s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.78s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-062686 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.78s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.38s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-062686 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-062686 --output=json --user=testUser: (7.376966236s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-837779 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-837779 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.055254ms)

-- stdout --
	{"specversion":"1.0","id":"60ea4bba-eeec-4a90-b5ca-a3343700521e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-837779] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e0a894e-4e1b-44ad-a3d3-43009f8903e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21508"}}
	{"specversion":"1.0","id":"6b72568c-d5de-4207-b696-494a4ef0bb6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f57c462f-e75e-43a2-892f-f9fc34626291","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig"}}
	{"specversion":"1.0","id":"8755438d-4752-4fb3-9737-c3a034c69a6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube"}}
	{"specversion":"1.0","id":"427f5ce4-2506-4104-856c-9b4b5952135c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e6d0bb1c-bc53-4d1a-a264-42c5aa27ebee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"333857c2-288c-4733-9f69-a34dafb584ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-837779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-837779
--- PASS: TestErrorJSONOutput (0.23s)
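
Each stdout line above is a CloudEvents 1.0 envelope; the final `io.k8s.sigs.minikube.error` event carries the exit code, error name, and advice in its data map, which is how the test ties exit status 56 to DRV_UNSUPPORTED_OS. A minimal decoder for the fields shown; the struct models only what appears in this report, and the sample line is trimmed from the run above:

	// Decoder for the CloudEvents envelope fields shown above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"333857c2-288c-4733-9f69-a34dafb584ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["name"])
	}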

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (99.72s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-131762 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-131762 --driver=kvm2  --container-runtime=crio: (45.497970622s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-154676 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-154676 --driver=kvm2  --container-runtime=crio: (51.096053995s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-131762
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-154676
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-154676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-154676
helpers_test.go:175: Cleaning up "first-131762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-131762
--- PASS: TestMinikubeProfile (99.72s)
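
`profile list -ojson` is used here to confirm that `minikube profile <name>` switched the active profile after two profiles were started back to back. A sketch of decoding that output, assuming the top-level valid/invalid profile arrays it emits; fields other than Name are omitted:

	// Sketch, assuming `profile list -ojson` emits valid/invalid arrays.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		var profiles struct {
			Valid   []struct{ Name string }
			Invalid []struct{ Name string }
		}
		if err := json.Unmarshal(out, &profiles); err != nil {
			panic(err)
		}
		for _, p := range profiles.Valid {
			fmt.Println("valid profile:", p.Name)
		}
	}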

TestMountStart/serial/StartWithMountFirst (28.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-377751 --memory=3072 --mount-string /tmp/TestMountStartserial885584212/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0908 14:18:53.441319 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-377751 --memory=3072 --mount-string /tmp/TestMountStartserial885584212/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.891373924s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.89s)

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-377751 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-377751 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)
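
The StartWithMount flags map the host directory onto /minikube-host in the guest over 9p; --mount-port, --mount-msize, and --mount-uid/--mount-gid set the transport port, message size, and ownership. VerifyMount then confirms the mount with `ls` plus `findmnt --json`. A sketch of decoding that findmnt output; the field selection is an assumption, though findmnt does report a top-level "filesystems" array:

	// Sketch of decoding `findmnt --json` from inside the guest.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-377751",
			"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
		if err != nil {
			panic(err)
		}
		var fs struct {
			Filesystems []struct {
				Target  string `json:"target"`
				Fstype  string `json:"fstype"`
				Options string `json:"options"`
			} `json:"filesystems"`
		}
		if err := json.Unmarshal(out, &fs); err != nil {
			panic(err)
		}
		for _, m := range fs.Filesystems {
			fmt.Printf("%s type=%s opts=%s\n", m.Target, m.Fstype, m.Options) // expect fstype 9p
		}
	}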

TestMountStart/serial/StartWithMountSecond (29.72s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-397917 --memory=3072 --mount-string /tmp/TestMountStartserial885584212/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0908 14:19:31.170512 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-397917 --memory=3072 --mount-string /tmp/TestMountStartserial885584212/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.71645461s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.72s)

TestMountStart/serial/VerifyMountSecond (0.42s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-397917 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-397917 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

TestMountStart/serial/DeleteFirst (0.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-377751 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.63s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-397917 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-397917 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (2.32s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-397917
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-397917: (2.32057479s)
--- PASS: TestMountStart/serial/Stop (2.32s)

TestMountStart/serial/RestartStopped (24.01s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-397917
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-397917: (23.00983043s)
--- PASS: TestMountStart/serial/RestartStopped (24.01s)

TestMountStart/serial/VerifyMountPostStop (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-397917 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-397917 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.43s)

TestMultiNode/serial/FreshStart2Nodes (114.71s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-546632 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-546632 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.225266604s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.71s)

TestMultiNode/serial/DeployApp2Nodes (4.62s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- rollout status deployment/busybox
E0908 14:21:56.511866 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-546632 -- rollout status deployment/busybox: (2.942264364s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- exec busybox-7b57f96db7-m5q2c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- exec busybox-7b57f96db7-qvc9c -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- exec busybox-7b57f96db7-m5q2c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- exec busybox-7b57f96db7-qvc9c -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- exec busybox-7b57f96db7-m5q2c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- exec busybox-7b57f96db7-qvc9c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.62s)
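
The nslookup fan-out above is the actual assertion: each busybox replica (one per node) must resolve the public name and both in-cluster service names through cluster DNS. A sketch of that loop, assuming the pod names captured by the earlier jsonpath step:

	// Fan out nslookup over every pod and every name; any failure means a
	// node whose pod cannot reach cluster DNS.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		pods := []string{"busybox-7b57f96db7-m5q2c", "busybox-7b57f96db7-qvc9c"}
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

		for _, pod := range pods {
			for _, name := range names {
				out, err := exec.Command("kubectl", "--context", "multinode-546632",
					"exec", pod, "--", "nslookup", name).CombinedOutput()
				fmt.Printf("%s -> %s: err=%v\n%s\n", pod, name, err, out)
			}
		}
	}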

TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- exec busybox-7b57f96db7-m5q2c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- exec busybox-7b57f96db7-m5q2c -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- exec busybox-7b57f96db7-qvc9c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546632 -- exec busybox-7b57f96db7-qvc9c -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
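
The shell pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pulls the host IP out of busybox nslookup output: line five, third space-separated field. The same extraction in Go, with an illustrative sample of busybox-style output:

	// hostIPFromNslookup reproduces `awk 'NR==5' | cut -d' ' -f3`:
	// take the fifth line of the nslookup output, then its third
	// space-separated field.
	package main

	import (
		"fmt"
		"strings"
	)

	func hostIPFromNslookup(output string) string {
		lines := strings.Split(output, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ") // NR==5 -> index 4
		if len(fields) < 3 {
			return ""
		}
		return fields[2] // -f3 -> index 2
	}

	func main() {
		// Illustrative busybox-style output; real formatting varies by resolver.
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.39.1 host.minikube.internal\n"
		fmt.Println(hostIPFromNslookup(sample)) // 192.168.39.1
	}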

TestMultiNode/serial/AddNode (49.24s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-546632 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-546632 -v=5 --alsologtostderr: (48.597577088s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.24s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-546632 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (8.26s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp testdata/cp-test.txt multinode-546632:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp multinode-546632:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1863889246/001/cp-test_multinode-546632.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp multinode-546632:/home/docker/cp-test.txt multinode-546632-m02:/home/docker/cp-test_multinode-546632_multinode-546632-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m02 "sudo cat /home/docker/cp-test_multinode-546632_multinode-546632-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp multinode-546632:/home/docker/cp-test.txt multinode-546632-m03:/home/docker/cp-test_multinode-546632_multinode-546632-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m03 "sudo cat /home/docker/cp-test_multinode-546632_multinode-546632-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp testdata/cp-test.txt multinode-546632-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp multinode-546632-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1863889246/001/cp-test_multinode-546632-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp multinode-546632-m02:/home/docker/cp-test.txt multinode-546632:/home/docker/cp-test_multinode-546632-m02_multinode-546632.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632 "sudo cat /home/docker/cp-test_multinode-546632-m02_multinode-546632.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp multinode-546632-m02:/home/docker/cp-test.txt multinode-546632-m03:/home/docker/cp-test_multinode-546632-m02_multinode-546632-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m03 "sudo cat /home/docker/cp-test_multinode-546632-m02_multinode-546632-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp testdata/cp-test.txt multinode-546632-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp multinode-546632-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1863889246/001/cp-test_multinode-546632-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp multinode-546632-m03:/home/docker/cp-test.txt multinode-546632:/home/docker/cp-test_multinode-546632-m03_multinode-546632.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632 "sudo cat /home/docker/cp-test_multinode-546632-m03_multinode-546632.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 cp multinode-546632-m03:/home/docker/cp-test.txt multinode-546632-m02:/home/docker/cp-test_multinode-546632-m03_multinode-546632-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 ssh -n multinode-546632-m02 "sudo cat /home/docker/cp-test_multinode-546632-m03_multinode-546632-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.26s)
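
Every `cp`/`ssh` pair above follows one pattern: copy cp-test.txt into a node, copy it back out to the host, then copy it into every other node, reading it back with `sudo cat` after each hop. A Go sketch that prints the same ordered-pair matrix of commands the test runs:

	// Print the copy/verify matrix the test walks: every ordered node pair
	// gets a cp followed by a `sudo cat` readback on the destination.
	package main

	import "fmt"

	func main() {
		profile := "multinode-546632"
		nodes := []string{"multinode-546632", "multinode-546632-m02", "multinode-546632-m03"}
		for _, src := range nodes {
			for _, dst := range nodes {
				if src == dst {
					continue
				}
				dest := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
				fmt.Printf("minikube -p %s cp %s:/home/docker/cp-test.txt %s:%s\n", profile, src, dst, dest)
				fmt.Printf("minikube -p %s ssh -n %s -- sudo cat %s\n", profile, dst, dest)
			}
		}
	}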

TestMultiNode/serial/StopNode (3.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-546632 node stop m03: (2.316809904s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-546632 status: exit status 7 (479.861088ms)

-- stdout --
	multinode-546632
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-546632-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-546632-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-546632 status --alsologtostderr: exit status 7 (488.340013ms)

-- stdout --
	multinode-546632
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-546632-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-546632-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 14:23:02.654576 1147569 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:23:02.655254 1147569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:23:02.655334 1147569 out.go:374] Setting ErrFile to fd 2...
	I0908 14:23:02.655379 1147569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:23:02.655857 1147569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 14:23:02.656251 1147569 out.go:368] Setting JSON to false
	I0908 14:23:02.656315 1147569 mustload.go:65] Loading cluster: multinode-546632
	I0908 14:23:02.656463 1147569 notify.go:220] Checking for updates...
	I0908 14:23:02.657361 1147569 config.go:182] Loaded profile config "multinode-546632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:23:02.657402 1147569 status.go:174] checking status of multinode-546632 ...
	I0908 14:23:02.657932 1147569 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:23:02.657997 1147569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:23:02.676881 1147569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0908 14:23:02.677456 1147569 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:23:02.678185 1147569 main.go:141] libmachine: Using API Version  1
	I0908 14:23:02.678215 1147569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:23:02.678761 1147569 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:23:02.679017 1147569 main.go:141] libmachine: (multinode-546632) Calling .GetState
	I0908 14:23:02.680782 1147569 status.go:371] multinode-546632 host status = "Running" (err=<nil>)
	I0908 14:23:02.680808 1147569 host.go:66] Checking if "multinode-546632" exists ...
	I0908 14:23:02.681282 1147569 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:23:02.681369 1147569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:23:02.698803 1147569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I0908 14:23:02.699419 1147569 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:23:02.700043 1147569 main.go:141] libmachine: Using API Version  1
	I0908 14:23:02.700075 1147569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:23:02.700519 1147569 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:23:02.700762 1147569 main.go:141] libmachine: (multinode-546632) Calling .GetIP
	I0908 14:23:02.704108 1147569 main.go:141] libmachine: (multinode-546632) DBG | domain multinode-546632 has defined MAC address 52:54:00:c9:a2:59 in network mk-multinode-546632
	I0908 14:23:02.704534 1147569 main.go:141] libmachine: (multinode-546632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:a2:59", ip: ""} in network mk-multinode-546632: {Iface:virbr1 ExpiryTime:2025-09-08 15:20:17 +0000 UTC Type:0 Mac:52:54:00:c9:a2:59 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-546632 Clientid:01:52:54:00:c9:a2:59}
	I0908 14:23:02.704572 1147569 main.go:141] libmachine: (multinode-546632) DBG | domain multinode-546632 has defined IP address 192.168.39.108 and MAC address 52:54:00:c9:a2:59 in network mk-multinode-546632
	I0908 14:23:02.704701 1147569 host.go:66] Checking if "multinode-546632" exists ...
	I0908 14:23:02.705018 1147569 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:23:02.705065 1147569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:23:02.722268 1147569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37465
	I0908 14:23:02.722836 1147569 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:23:02.723509 1147569 main.go:141] libmachine: Using API Version  1
	I0908 14:23:02.723538 1147569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:23:02.724037 1147569 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:23:02.724391 1147569 main.go:141] libmachine: (multinode-546632) Calling .DriverName
	I0908 14:23:02.724649 1147569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:23:02.724677 1147569 main.go:141] libmachine: (multinode-546632) Calling .GetSSHHostname
	I0908 14:23:02.728261 1147569 main.go:141] libmachine: (multinode-546632) DBG | domain multinode-546632 has defined MAC address 52:54:00:c9:a2:59 in network mk-multinode-546632
	I0908 14:23:02.728731 1147569 main.go:141] libmachine: (multinode-546632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:a2:59", ip: ""} in network mk-multinode-546632: {Iface:virbr1 ExpiryTime:2025-09-08 15:20:17 +0000 UTC Type:0 Mac:52:54:00:c9:a2:59 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-546632 Clientid:01:52:54:00:c9:a2:59}
	I0908 14:23:02.728761 1147569 main.go:141] libmachine: (multinode-546632) DBG | domain multinode-546632 has defined IP address 192.168.39.108 and MAC address 52:54:00:c9:a2:59 in network mk-multinode-546632
	I0908 14:23:02.728990 1147569 main.go:141] libmachine: (multinode-546632) Calling .GetSSHPort
	I0908 14:23:02.729249 1147569 main.go:141] libmachine: (multinode-546632) Calling .GetSSHKeyPath
	I0908 14:23:02.729436 1147569 main.go:141] libmachine: (multinode-546632) Calling .GetSSHUsername
	I0908 14:23:02.729644 1147569 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/multinode-546632/id_rsa Username:docker}
	I0908 14:23:02.817957 1147569 ssh_runner.go:195] Run: systemctl --version
	I0908 14:23:02.826371 1147569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:23:02.846407 1147569 kubeconfig.go:125] found "multinode-546632" server: "https://192.168.39.108:8443"
	I0908 14:23:02.846448 1147569 api_server.go:166] Checking apiserver status ...
	I0908 14:23:02.846499 1147569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:23:02.869082 1147569 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1401/cgroup
	W0908 14:23:02.881145 1147569 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1401/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 14:23:02.881224 1147569 ssh_runner.go:195] Run: ls
	I0908 14:23:02.886849 1147569 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0908 14:23:02.893250 1147569 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0908 14:23:02.893297 1147569 status.go:463] multinode-546632 apiserver status = Running (err=<nil>)
	I0908 14:23:02.893309 1147569 status.go:176] multinode-546632 status: &{Name:multinode-546632 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:23:02.893344 1147569 status.go:174] checking status of multinode-546632-m02 ...
	I0908 14:23:02.893676 1147569 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:23:02.893720 1147569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:23:02.910587 1147569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I0908 14:23:02.911083 1147569 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:23:02.911594 1147569 main.go:141] libmachine: Using API Version  1
	I0908 14:23:02.911623 1147569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:23:02.912043 1147569 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:23:02.912246 1147569 main.go:141] libmachine: (multinode-546632-m02) Calling .GetState
	I0908 14:23:02.914077 1147569 status.go:371] multinode-546632-m02 host status = "Running" (err=<nil>)
	I0908 14:23:02.914101 1147569 host.go:66] Checking if "multinode-546632-m02" exists ...
	I0908 14:23:02.914409 1147569 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:23:02.914448 1147569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:23:02.933216 1147569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0908 14:23:02.933784 1147569 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:23:02.934250 1147569 main.go:141] libmachine: Using API Version  1
	I0908 14:23:02.934276 1147569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:23:02.934643 1147569 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:23:02.934793 1147569 main.go:141] libmachine: (multinode-546632-m02) Calling .GetIP
	I0908 14:23:02.938047 1147569 main.go:141] libmachine: (multinode-546632-m02) DBG | domain multinode-546632-m02 has defined MAC address 52:54:00:e6:34:53 in network mk-multinode-546632
	I0908 14:23:02.938506 1147569 main.go:141] libmachine: (multinode-546632-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:34:53", ip: ""} in network mk-multinode-546632: {Iface:virbr1 ExpiryTime:2025-09-08 15:21:20 +0000 UTC Type:0 Mac:52:54:00:e6:34:53 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:multinode-546632-m02 Clientid:01:52:54:00:e6:34:53}
	I0908 14:23:02.938544 1147569 main.go:141] libmachine: (multinode-546632-m02) DBG | domain multinode-546632-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:e6:34:53 in network mk-multinode-546632
	I0908 14:23:02.938732 1147569 host.go:66] Checking if "multinode-546632-m02" exists ...
	I0908 14:23:02.939057 1147569 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:23:02.939106 1147569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:23:02.956429 1147569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I0908 14:23:02.956900 1147569 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:23:02.957372 1147569 main.go:141] libmachine: Using API Version  1
	I0908 14:23:02.957408 1147569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:23:02.957837 1147569 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:23:02.958060 1147569 main.go:141] libmachine: (multinode-546632-m02) Calling .DriverName
	I0908 14:23:02.958283 1147569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:23:02.958306 1147569 main.go:141] libmachine: (multinode-546632-m02) Calling .GetSSHHostname
	I0908 14:23:02.961266 1147569 main.go:141] libmachine: (multinode-546632-m02) DBG | domain multinode-546632-m02 has defined MAC address 52:54:00:e6:34:53 in network mk-multinode-546632
	I0908 14:23:02.961798 1147569 main.go:141] libmachine: (multinode-546632-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:34:53", ip: ""} in network mk-multinode-546632: {Iface:virbr1 ExpiryTime:2025-09-08 15:21:20 +0000 UTC Type:0 Mac:52:54:00:e6:34:53 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:multinode-546632-m02 Clientid:01:52:54:00:e6:34:53}
	I0908 14:23:02.961827 1147569 main.go:141] libmachine: (multinode-546632-m02) DBG | domain multinode-546632-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:e6:34:53 in network mk-multinode-546632
	I0908 14:23:02.961982 1147569 main.go:141] libmachine: (multinode-546632-m02) Calling .GetSSHPort
	I0908 14:23:02.962207 1147569 main.go:141] libmachine: (multinode-546632-m02) Calling .GetSSHKeyPath
	I0908 14:23:02.962414 1147569 main.go:141] libmachine: (multinode-546632-m02) Calling .GetSSHUsername
	I0908 14:23:02.962589 1147569 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21508-1116714/.minikube/machines/multinode-546632-m02/id_rsa Username:docker}
	I0908 14:23:03.048842 1147569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:23:03.068150 1147569 status.go:176] multinode-546632-m02 status: &{Name:multinode-546632-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:23:03.068195 1147569 status.go:174] checking status of multinode-546632-m03 ...
	I0908 14:23:03.068549 1147569 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:23:03.068601 1147569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:23:03.085654 1147569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0908 14:23:03.086270 1147569 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:23:03.086882 1147569 main.go:141] libmachine: Using API Version  1
	I0908 14:23:03.086912 1147569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:23:03.087312 1147569 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:23:03.087552 1147569 main.go:141] libmachine: (multinode-546632-m03) Calling .GetState
	I0908 14:23:03.089264 1147569 status.go:371] multinode-546632-m03 host status = "Stopped" (err=<nil>)
	I0908 14:23:03.089292 1147569 status.go:384] host is not running, skipping remaining checks
	I0908 14:23:03.089298 1147569 status.go:176] multinode-546632-m03 status: &{Name:multinode-546632-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.29s)
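
Note the exit codes above: once any node is stopped, `minikube status` exits 7, so the test treats a non-zero exit as data and asserts on the stdout block instead. A sketch of recovering that code in Go:

	// minikubeStatus runs `minikube status` and separates the exit code
	// from real failures: with a stopped node the command exits 7, which
	// the test treats as data to assert on, not as an error.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func minikubeStatus(profile string) (string, int, error) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "status")
		out, err := cmd.Output() // stdout is still populated on non-zero exit
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return string(out), exitErr.ExitCode(), nil
		}
		return string(out), 0, err
	}

	func main() {
		out, code, err := minikubeStatus("multinode-546632")
		fmt.Println("exit:", code, "err:", err)
		fmt.Print(out)
	}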

TestMultiNode/serial/StartAfterStop (38.25s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-546632 node start m03 -v=5 --alsologtostderr: (37.51298661s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.25s)

TestMultiNode/serial/RestartKeepsNodes (349.83s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-546632
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-546632
E0908 14:23:53.441386 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:24:31.169893 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-546632: (3m3.591709043s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-546632 --wait=true -v=5 --alsologtostderr
E0908 14:27:34.232402 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:53.440161 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-546632 --wait=true -v=5 --alsologtostderr: (2m46.123777097s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-546632
E0908 14:29:31.163901 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/RestartKeepsNodes (349.83s)

TestMultiNode/serial/DeleteNode (2.86s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-546632 node delete m03: (2.254222787s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.86s)
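
The final `kubectl get nodes -o go-template` call prints the status of each node's Ready condition, one per line, so the test can count ready nodes after the delete. A pared-down stand-in that runs the same template with Go's text/template (exported field names here replace the lowercase JSON paths kubectl evaluates):

	// A stand-in for the node list, just enough to execute the template.
	package main

	import (
		"os"
		"text/template"
	)

	type condition struct {
		Type   string
		Status string
	}

	type node struct {
		Status struct{ Conditions []condition }
	}

	type nodeList struct{ Items []node }

	const tpl = `{{range .Items}}{{range .Status.Conditions}}` +
		`{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		ready := node{}
		ready.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
		list := nodeList{Items: []node{ready, ready}}

		t := template.Must(template.New("ready").Parse(tpl))
		_ = t.Execute(os.Stdout, list) // prints " True" once per node
	}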

TestMultiNode/serial/StopMultiNode (181.77s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-546632 stop: (3m1.558886549s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-546632 status: exit status 7 (106.15072ms)

-- stdout --
	multinode-546632
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-546632-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-546632 status --alsologtostderr: exit status 7 (101.920364ms)

-- stdout --
	multinode-546632
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-546632-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 14:32:35.748356 1150374 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:32:35.748622 1150374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:32:35.748630 1150374 out.go:374] Setting ErrFile to fd 2...
	I0908 14:32:35.748635 1150374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:32:35.748909 1150374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 14:32:35.749117 1150374 out.go:368] Setting JSON to false
	I0908 14:32:35.749151 1150374 mustload.go:65] Loading cluster: multinode-546632
	I0908 14:32:35.749241 1150374 notify.go:220] Checking for updates...
	I0908 14:32:35.749561 1150374 config.go:182] Loaded profile config "multinode-546632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:32:35.749583 1150374 status.go:174] checking status of multinode-546632 ...
	I0908 14:32:35.750039 1150374 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:32:35.750093 1150374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:32:35.769349 1150374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35819
	I0908 14:32:35.769916 1150374 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:32:35.770576 1150374 main.go:141] libmachine: Using API Version  1
	I0908 14:32:35.770615 1150374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:32:35.771141 1150374 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:32:35.771428 1150374 main.go:141] libmachine: (multinode-546632) Calling .GetState
	I0908 14:32:35.773451 1150374 status.go:371] multinode-546632 host status = "Stopped" (err=<nil>)
	I0908 14:32:35.773478 1150374 status.go:384] host is not running, skipping remaining checks
	I0908 14:32:35.773487 1150374 status.go:176] multinode-546632 status: &{Name:multinode-546632 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:32:35.773534 1150374 status.go:174] checking status of multinode-546632-m02 ...
	I0908 14:32:35.773976 1150374 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21508-1116714/.minikube/bin/docker-machine-driver-kvm2
	I0908 14:32:35.774039 1150374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 14:32:35.791556 1150374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I0908 14:32:35.792155 1150374 main.go:141] libmachine: () Calling .GetVersion
	I0908 14:32:35.792691 1150374 main.go:141] libmachine: Using API Version  1
	I0908 14:32:35.792718 1150374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 14:32:35.793106 1150374 main.go:141] libmachine: () Calling .GetMachineName
	I0908 14:32:35.793322 1150374 main.go:141] libmachine: (multinode-546632-m02) Calling .GetState
	I0908 14:32:35.795370 1150374 status.go:371] multinode-546632-m02 host status = "Stopped" (err=<nil>)
	I0908 14:32:35.795388 1150374 status.go:384] host is not running, skipping remaining checks
	I0908 14:32:35.795394 1150374 status.go:176] multinode-546632-m02 status: &{Name:multinode-546632-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.77s)

TestMultiNode/serial/RestartMultiNode (136.7s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-546632 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0908 14:33:53.440459 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:34:31.163987 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-546632 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m16.042164563s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546632 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (136.70s)

TestMultiNode/serial/ValidateNameConflict (47.42s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-546632
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-546632-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-546632-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.016117ms)

-- stdout --
	* [multinode-546632-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-546632-m02' is duplicated with machine name 'multinode-546632-m02' in profile 'multinode-546632'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-546632-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-546632-m03 --driver=kvm2  --container-runtime=crio: (46.336147894s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-546632
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-546632: exit status 80 (247.007946ms)

-- stdout --
	* Adding node m03 to cluster multinode-546632 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-546632-m03 already exists in multinode-546632-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-546632-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.42s)
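
Both failures above come from the same uniqueness rule: a new profile may not reuse an existing profile name or any machine name inside one (node machines are named <profile>, <profile>-m02, and so on). A toy version of the check; the map layout is illustrative, not minikube's real profile store:

	package main

	import "fmt"

	// validateProfileName mirrors the rule exercised above.
	func validateProfileName(name string, profiles map[string][]string) error {
		for profile, machines := range profiles {
			if name == profile {
				return fmt.Errorf("profile name %q already exists", name)
			}
			for _, machine := range machines {
				if name == machine {
					return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
						name, machine, profile)
				}
			}
		}
		return nil
	}

	func main() {
		existing := map[string][]string{
			"multinode-546632": {"multinode-546632", "multinode-546632-m02"},
		}
		fmt.Println(validateProfileName("multinode-546632-m02", existing)) // rejected, like exit status 14 above
		fmt.Println(validateProfileName("multinode-546632-m03", existing)) // passes this check; the test then hits the node-add conflict
	}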

TestScheduledStopUnix (122.31s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-898001 --memory=3072 --driver=kvm2  --container-runtime=crio
E0908 14:38:36.515629 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:38:53.441692 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-898001 --memory=3072 --driver=kvm2  --container-runtime=crio: (50.401222878s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-898001 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-898001 -n scheduled-stop-898001
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-898001 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 14:39:22.879164 1120875 retry.go:31] will retry after 143.84µs: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.880348 1120875 retry.go:31] will retry after 161.943µs: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.881504 1120875 retry.go:31] will retry after 163.477µs: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.882688 1120875 retry.go:31] will retry after 262.085µs: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.883839 1120875 retry.go:31] will retry after 345.075µs: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.884993 1120875 retry.go:31] will retry after 491.323µs: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.886148 1120875 retry.go:31] will retry after 671.498µs: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.887331 1120875 retry.go:31] will retry after 1.79621ms: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.889622 1120875 retry.go:31] will retry after 1.597719ms: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.891878 1120875 retry.go:31] will retry after 2.928995ms: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.895177 1120875 retry.go:31] will retry after 8.33443ms: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.904554 1120875 retry.go:31] will retry after 5.233782ms: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.910895 1120875 retry.go:31] will retry after 15.888689ms: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.927254 1120875 retry.go:31] will retry after 9.928623ms: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.937597 1120875 retry.go:31] will retry after 23.092805ms: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
I0908 14:39:22.960845 1120875 retry.go:31] will retry after 32.390203ms: open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/scheduled-stop-898001/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-898001 --cancel-scheduled
E0908 14:39:31.168649 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-898001 -n scheduled-stop-898001
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-898001
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-898001 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-898001
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-898001: exit status 7 (83.875601ms)

-- stdout --
	scheduled-stop-898001
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-898001 -n scheduled-stop-898001
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-898001 -n scheduled-stop-898001: exit status 7 (79.121677ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-898001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-898001
--- PASS: TestScheduledStopUnix (122.31s)
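
The burst of retry.go lines above is a polling loop: the test watches for the scheduled-stop pid file with short, growing, jittered delays rather than one fixed sleep. A sketch of that pattern, with illustrative durations and a hypothetical path:

	// waitForFile polls with a short, growing, jittered delay, the same
	// shape as the retry.go lines above.
	package main

	import (
		"fmt"
		"math/rand"
		"os"
		"time"
	)

	func waitForFile(path string, budget time.Duration) error {
		delay := 150 * time.Microsecond
		deadline := time.Now().Add(budget)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			} else if !os.IsNotExist(err) {
				return err
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			jitter := time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v\n", delay+jitter)
			time.Sleep(delay + jitter)
			delay *= 2
		}
	}

	func main() {
		err := waitForFile("/tmp/example-profile/pid", 2*time.Second) // hypothetical path
		fmt.Println(err)
	}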

TestRunningBinaryUpgrade (108.11s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.600441347 start -p running-upgrade-448633 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E0908 14:44:31.164736 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.600441347 start -p running-upgrade-448633 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m15.397968238s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-448633 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-448633 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (31.2410402s)
helpers_test.go:175: Cleaning up "running-upgrade-448633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-448633
--- PASS: TestRunningBinaryUpgrade (108.11s)
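
The whole test is two starts against one profile: first with a temp copy of the released v1.32.0 binary, then with the binary under test, which must adopt the already-running cluster in place. The sequence, sketched (the temp binary's numeric suffix varies per run):

	// run shells out and echoes combined output, mirroring the two start
	// invocations logged above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(bin string, args ...string) error {
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", bin, args, out)
		return err
	}

	func main() {
		profile := "running-upgrade-448633"
		oldBin := "/tmp/minikube-v1.32.0.600441347" // temp copy of the released binary
		newBin := "out/minikube-linux-amd64"

		// 1) Boot the cluster with the old release.
		if err := run(oldBin, "start", "-p", profile, "--memory=3072",
			"--vm-driver=kvm2", "--container-runtime=crio"); err != nil {
			panic(err)
		}
		// 2) Restart the same profile with the binary under test; it must
		//    take over the running cluster rather than recreate it.
		if err := run(newBin, "start", "-p", profile, "--memory=3072",
			"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio"); err != nil {
			panic(err)
		}
	}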

TestKubernetesUpgrade (255.62s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-048258 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-048258 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m42.609273615s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-048258
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-048258: (2.339247789s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-048258 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-048258 status --format={{.Host}}: exit status 7 (78.243618ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-048258 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-048258 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.578853564s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-048258 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-048258 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-048258 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (95.649897ms)

-- stdout --
	* [kubernetes-upgrade-048258] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-048258
	    minikube start -p kubernetes-upgrade-048258 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0482582 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-048258 --kubernetes-version=v1.34.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-048258 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-048258 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.747927354s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-048258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-048258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-048258: (1.108935243s)
--- PASS: TestKubernetesUpgrade (255.62s)
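The sequence this test walks through can be replayed by hand with the same flags that appear in the log; a minimal sketch (the profile name here is arbitrary):

    # Start on the old Kubernetes version, then stop the cluster.
    minikube start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p k8s-upgrade

    # Restarting on a newer version upgrades the stopped cluster in place.
    minikube start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.34.0 --driver=kvm2 --container-runtime=crio

    # A downgrade attempt is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106),
    # and the cluster can still be restarted at the newer version afterwards.
    minikube start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio || echo "downgrade refused (exit $?)"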

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-918994 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-918994 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (87.876103ms)

-- stdout --
	* [NoKubernetes-918994] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
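This is the harness confirming flag validation: --no-kubernetes and --kubernetes-version contradict each other, so the start is rejected before any VM work happens. A minimal sketch of the failure and the fix the error message suggests (profile name arbitrary):

    # Rejected with MK_USAGE (exit status 14).
    minikube start -p nok8s --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio

    # Clear any globally pinned version, then start without Kubernetes.
    minikube config unset kubernetes-version
    minikube start -p nok8s --no-kubernetes --driver=kvm2 --container-runtime=crio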

TestNoKubernetes/serial/StartWithK8s (121.31s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-918994 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-918994 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m0.988236788s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-918994 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (121.31s)

TestNoKubernetes/serial/StartWithStopK8s (9.37s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-918994 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-918994 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (8.094704578s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-918994 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-918994 status -o json: exit status 2 (300.479494ms)

-- stdout --
	{"Name":"NoKubernetes-918994","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-918994
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.37s)
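The JSON status above is the interesting artifact: after a --no-kubernetes start the host stays Running while the kubelet and API server report Stopped, which is why the status call exits 2. A sketch for scripting against it, assuming jq is installed and a single-node profile:

    # Prints Running / Stopped / Stopped for a profile in this state.
    minikube -p NoKubernetes-918994 status -o json | jq -r '.Host, .Kubelet, .APIServer'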

TestNoKubernetes/serial/Start (27.93s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-918994 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-918994 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (27.928140829s)
--- PASS: TestNoKubernetes/serial/Start (27.93s)

TestStoppedBinaryUpgrade/Setup (0.61s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.61s)

TestStoppedBinaryUpgrade/Upgrade (125.5s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.120934314 start -p stopped-upgrade-051487 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.120934314 start -p stopped-upgrade-051487 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m2.548297516s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.120934314 -p stopped-upgrade-051487 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.120934314 -p stopped-upgrade-051487 stop: (2.159751948s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-051487 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0908 14:44:14.234746 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-051487 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.792903478s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (125.50s)
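The shape of this test is a binary upgrade against a stopped profile: an older minikube release creates the cluster, and the binary under test must adopt and restart it. A minimal sketch, assuming an old release has been saved as /tmp/minikube-old (the log uses a generated temp-file name):

    # Create and stop a cluster with the previous release
    # (older releases spell the driver flag --vm-driver).
    /tmp/minikube-old start -p stopped-upgrade --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-old -p stopped-upgrade stop

    # The new binary restarts the stopped profile in place.
    out/minikube-linux-amd64 start -p stopped-upgrade --memory=3072 --driver=kvm2 --container-runtime=crio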

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-918994 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-918994 "sudo systemctl is-active --quiet service kubelet": exit status 1 (222.286288ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
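The verification leans on systemctl's exit code rather than its output: with --quiet, is-active prints nothing and returns zero only when the unit is active, so the non-zero ssh exit above is the pass condition. A sketch of the same probe:

    # Exit status 0 means kubelet is active; anything else means it is not running.
    minikube ssh -p NoKubernetes-918994 "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet running" || echo "kubelet not running"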

TestNoKubernetes/serial/ProfileList (1.19s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.19s)

TestNoKubernetes/serial/Stop (1.4s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-918994
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-918994: (1.402790708s)
--- PASS: TestNoKubernetes/serial/Stop (1.40s)

TestNoKubernetes/serial/StartNoArgs (68.28s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-918994 --driver=kvm2  --container-runtime=crio
E0908 14:43:53.441044 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-918994 --driver=kvm2  --container-runtime=crio: (1m8.283977366s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (68.28s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-918994 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-918994 "sudo systemctl is-active --quiet service kubelet": exit status 1 (233.266793ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.44s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-051487
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-051487: (1.438085649s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.44s)

TestPause/serial/Start (111.01s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-120061 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-120061 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m51.014201559s)
--- PASS: TestPause/serial/Start (111.01s)

TestNetworkPlugins/group/false (3.61s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-814283 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-814283 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (112.131526ms)

-- stdout --
	* [false-814283] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0908 14:45:40.874288 1158672 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:45:40.874414 1158672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:45:40.874421 1158672 out.go:374] Setting ErrFile to fd 2...
	I0908 14:45:40.874428 1158672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:45:40.874633 1158672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1116714/.minikube/bin
	I0908 14:45:40.875256 1158672 out.go:368] Setting JSON to false
	I0908 14:45:40.876419 1158672 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":19685,"bootTime":1757323056,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 14:45:40.876490 1158672 start.go:140] virtualization: kvm guest
	I0908 14:45:40.878500 1158672 out.go:179] * [false-814283] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 14:45:40.879553 1158672 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 14:45:40.879590 1158672 notify.go:220] Checking for updates...
	I0908 14:45:40.881915 1158672 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:45:40.883096 1158672 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1116714/kubeconfig
	I0908 14:45:40.884172 1158672 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1116714/.minikube
	I0908 14:45:40.885248 1158672 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 14:45:40.886176 1158672 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:45:40.887747 1158672 config.go:182] Loaded profile config "cert-expiration-001432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:45:40.887864 1158672 config.go:182] Loaded profile config "pause-120061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:45:40.887967 1158672 config.go:182] Loaded profile config "running-upgrade-448633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0908 14:45:40.888105 1158672 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:45:40.928598 1158672 out.go:179] * Using the kvm2 driver based on user configuration
	I0908 14:45:40.929596 1158672 start.go:304] selected driver: kvm2
	I0908 14:45:40.929625 1158672 start.go:918] validating driver "kvm2" against <nil>
	I0908 14:45:40.929638 1158672 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:45:40.931629 1158672 out.go:203] 
	W0908 14:45:40.932985 1158672 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0908 14:45:40.934357 1158672 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-814283 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-814283

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-814283

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-814283

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-814283

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-814283

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-814283

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-814283

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-814283

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-814283

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-814283

>>> host: /etc/nsswitch.conf:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: /etc/hosts:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: /etc/resolv.conf:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-814283

>>> host: crictl pods:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: crictl containers:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> k8s: describe netcat deployment:
error: context "false-814283" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-814283" does not exist

>>> k8s: netcat logs:
error: context "false-814283" does not exist

>>> k8s: describe coredns deployment:
error: context "false-814283" does not exist

>>> k8s: describe coredns pods:
error: context "false-814283" does not exist

>>> k8s: coredns logs:
error: context "false-814283" does not exist

>>> k8s: describe api server pod(s):
error: context "false-814283" does not exist

>>> k8s: api server logs:
error: context "false-814283" does not exist

>>> host: /etc/cni:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: ip a s:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: ip r s:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: iptables-save:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: iptables table nat:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> k8s: describe kube-proxy daemon set:
error: context "false-814283" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-814283" does not exist

>>> k8s: kube-proxy logs:
error: context "false-814283" does not exist

>>> host: kubelet daemon status:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: kubelet daemon config:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> k8s: kubelet logs:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:41:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.50.183:8443
  name: cert-expiration-001432
contexts:
- context:
    cluster: cert-expiration-001432
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:41:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: cert-expiration-001432
  name: cert-expiration-001432
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-001432
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/cert-expiration-001432/client.crt
    client-key: /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/cert-expiration-001432/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-814283

>>> host: docker daemon status:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: docker daemon config:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: /etc/docker/daemon.json:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: docker system info:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: cri-docker daemon status:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: cri-docker daemon config:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: cri-dockerd version:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: containerd daemon status:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: containerd daemon config:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: /etc/containerd/config.toml:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: containerd config dump:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: crio daemon status:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: crio daemon config:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: /etc/crio:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

>>> host: crio config:
* Profile "false-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814283"

----------------------- debugLogs end: false-814283 [took: 3.331150201s] --------------------------------
helpers_test.go:175: Cleaning up "false-814283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-814283
--- PASS: TestNetworkPlugins/group/false (3.61s)
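The rejection above is the point of the test: with --container-runtime=crio, minikube requires a CNI, so --cni=false fails validation (MK_USAGE, exit status 14) before any VM is created. A sketch of the rejected call next to an accepted one; bridge is assumed here as one of minikube's built-in CNI choices:

    # Rejected: crio needs a CNI.
    minikube start -p crio-demo --cni=false --driver=kvm2 --container-runtime=crio

    # Accepted: name a concrete CNI instead.
    minikube start -p crio-demo --cni=bridge --driver=kvm2 --container-runtime=crio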

TestStartStop/group/old-k8s-version/serial/FirstStart (135.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-454279 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-454279 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (2m15.222425618s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.22s)

TestStartStop/group/no-preload/serial/FirstStart (144.26s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-301894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-301894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (2m24.258904455s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (144.26s)

TestStartStop/group/embed-certs/serial/FirstStart (138.24s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-372004 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-372004 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (2m18.235708158s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (138.24s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-391485 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-391485 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m25.283299192s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.28s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-454279 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3e3a1d38-01a4-4706-9f0f-faae9e1a0d5f] Pending
helpers_test.go:352: "busybox" [3e3a1d38-01a4-4706-9f0f-faae9e1a0d5f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3e3a1d38-01a4-4706-9f0f-faae9e1a0d5f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005269096s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-454279 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.77s)
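The deploy step only needs a trivial pod carrying the integration-test=busybox label the harness waits on. testdata/busybox.yaml itself is not reproduced in this log, but a hypothetical one-line equivalent, plus the same readiness wait and ulimit probe, would look like:

    # Stand-in for `kubectl create -f testdata/busybox.yaml` (the real manifest may differ).
    kubectl --context old-k8s-version-454279 run busybox --image=busybox:stable --labels=integration-test=busybox -- sleep 3600
    kubectl --context old-k8s-version-454279 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-454279 exec busybox -- /bin/sh -c "ulimit -n"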

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-454279 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-454279 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.460140725s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-454279 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.59s)

TestStartStop/group/old-k8s-version/serial/Stop (91.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-454279 --alsologtostderr -v=3
E0908 14:48:53.440497 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-454279 --alsologtostderr -v=3: (1m31.052229468s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.05s)

TestStartStop/group/no-preload/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-301894 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4889d8a9-c946-418e-89ef-7a9da32d792a] Pending
helpers_test.go:352: "busybox" [4889d8a9-c946-418e-89ef-7a9da32d792a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4889d8a9-c946-418e-89ef-7a9da32d792a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005867707s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-301894 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.36s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-301894 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-301894 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.250409676s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-301894 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.36s)

TestStartStop/group/no-preload/serial/Stop (90.98s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-301894 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-301894 --alsologtostderr -v=3: (1m30.982452761s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.98s)

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-372004 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f1970266-fa30-4738-8bd1-583d3b292925] Pending
helpers_test.go:352: "busybox" [f1970266-fa30-4738-8bd1-583d3b292925] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0908 14:49:31.163090 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/functional-864151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [f1970266-fa30-4738-8bd1-583d3b292925] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005493303s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-372004 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-372004 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-372004 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.10058563s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-372004 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (90.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-372004 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-372004 --alsologtostderr -v=3: (1m30.966293863s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.97s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-391485 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3f210d95-ffec-4c33-bada-f22081a7d612] Pending
helpers_test.go:352: "busybox" [3f210d95-ffec-4c33-bada-f22081a7d612] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3f210d95-ffec-4c33-bada-f22081a7d612] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004331587s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-391485 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-391485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-391485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.008179839s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-391485 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-391485 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-391485 --alsologtostderr -v=3: (1m31.472706224s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.47s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-454279 -n old-k8s-version-454279
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-454279 -n old-k8s-version-454279: exit status 7 (78.154408ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-454279 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
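The "(may be ok)" note is about exit-code semantics: minikube status exits non-zero (7 here) when the host is stopped, so automation has to read the printed state rather than treat the exit code as fatal. A sketch:

    # Capture the host state without letting the non-zero exit abort the script.
    state=$(minikube status --format='{{.Host}}' -p old-k8s-version-454279 || true)
    echo "host state: ${state}"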

TestStartStop/group/old-k8s-version/serial/SecondStart (51.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-454279 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-454279 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (51.481111489s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-454279 -n old-k8s-version-454279
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301894 -n no-preload-301894
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301894 -n no-preload-301894: exit status 7 (88.0353ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-301894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (65.7s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-301894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-301894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m5.30224377s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301894 -n no-preload-301894
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (65.70s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-g25dc" [b011db4b-ab35-410f-924c-13d3c67318da] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-g25dc" [b011db4b-ab35-410f-924c-13d3c67318da] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005532402s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)
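
The wait performed here (helpers_test.go:352) polls pods by label until they are Running and Ready. A hedged, roughly equivalent standalone check with kubectl wait, not the helper's actual implementation:

    kubectl --context old-k8s-version-454279 -n kubernetes-dashboard \
        wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m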

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-372004 -n embed-certs-372004
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-372004 -n embed-certs-372004: exit status 7 (91.503346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-372004 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (56.36s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-372004 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-372004 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (56.007876243s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-372004 -n embed-certs-372004
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (56.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-g25dc" [b011db4b-ab35-410f-924c-13d3c67318da] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005813489s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-454279 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-454279 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)
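
VerifyKubernetesImages lists the images loaded in the profile and reports anything outside the default Kubernetes image set. To inspect the same JSON by hand, a sketch assuming the entries expose a repoTags array (field names may vary between minikube versions):

    minikube -p old-k8s-version-454279 image list --format=json | jq -r '.[].repoTags[]?'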

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-454279 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-454279 --alsologtostderr -v=1: (1.212063844s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-454279 -n old-k8s-version-454279
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-454279 -n old-k8s-version-454279: exit status 2 (325.584801ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-454279 -n old-k8s-version-454279
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-454279 -n old-k8s-version-454279: exit status 2 (363.742684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-454279 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-454279 --alsologtostderr -v=1: (1.161762858s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-454279 -n old-k8s-version-454279
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-454279 -n old-k8s-version-454279
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.10s)
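
The Pause subtest asserts a specific status contract: while paused, {{.APIServer}} prints "Paused" and {{.Kubelet}} prints "Stopped", both with exit status 2; after unpause, the same queries exit cleanly. Reproduced as a shell sketch against the profile from the log:

    minikube pause -p old-k8s-version-454279
    minikube status --format='{{.APIServer}}' -p old-k8s-version-454279   # "Paused", exit 2
    minikube status --format='{{.Kubelet}}' -p old-k8s-version-454279     # "Stopped", exit 2
    minikube unpause -p old-k8s-version-454279
    minikube status --format='{{.APIServer}}' -p old-k8s-version-454279   # zero exit again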

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (72.47s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-202543 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-202543 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m12.474657465s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (72.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-391485 -n default-k8s-diff-port-391485
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-391485 -n default-k8s-diff-port-391485: exit status 7 (90.971795ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-391485 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (88.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-391485 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-391485 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m27.892846585s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-391485 -n default-k8s-diff-port-391485
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (88.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w6xtj" [8d9f1e13-68e8-40d5-bfb8-9eff9d0d4541] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w6xtj" [8d9f1e13-68e8-40d5-bfb8-9eff9d0d4541] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004245154s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w6xtj" [8d9f1e13-68e8-40d5-bfb8-9eff9d0d4541] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005522165s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-301894 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mq4r6" [7b5f5f9d-9a61-4326-a9e1-af3c6dd5133d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007965836s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-301894 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.91s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-301894 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-301894 --alsologtostderr -v=1: (2.373484994s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301894 -n no-preload-301894
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301894 -n no-preload-301894: exit status 2 (320.309171ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301894 -n no-preload-301894
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301894 -n no-preload-301894: exit status 2 (322.270336ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-301894 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-301894 --alsologtostderr -v=1: (1.087811387s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301894 -n no-preload-301894
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301894 -n no-preload-301894
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mq4r6" [7b5f5f9d-9a61-4326-a9e1-af3c6dd5133d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006338683s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-372004 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (111.72s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m51.716209596s)
--- PASS: TestNetworkPlugins/group/auto/Start (111.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-372004 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.69s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-372004 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-372004 --alsologtostderr -v=1: (1.195982344s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-372004 -n embed-certs-372004
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-372004 -n embed-certs-372004: exit status 2 (352.76933ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-372004 -n embed-certs-372004
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-372004 -n embed-certs-372004: exit status 2 (342.269198ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-372004 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-372004 --alsologtostderr -v=1: (1.01108057s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-372004 -n embed-certs-372004
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-372004 -n embed-certs-372004
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.69s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (134.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (2m14.100552721s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (134.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.94s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-202543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-202543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.935955951s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.94s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (9.57s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-202543 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-202543 --alsologtostderr -v=3: (9.566288355s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.57s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202543 -n newest-cni-202543
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202543 -n newest-cni-202543: exit status 7 (81.484779ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-202543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (78.59s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-202543 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-202543 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m18.068858138s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202543 -n newest-cni-202543
E0908 14:54:06.502796 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:06.509282 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:06.520748 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:06.542308 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:06.583840 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:06.665426 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:06.826892 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (78.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-grtqw" [76136c65-d51d-4f96-9eab-666ed3428568] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004966348s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-grtqw" [76136c65-d51d-4f96-9eab-666ed3428568] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005415658s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-391485 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-391485 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-391485 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-391485 --alsologtostderr -v=1: (1.06013822s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-391485 -n default-k8s-diff-port-391485
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-391485 -n default-k8s-diff-port-391485: exit status 2 (290.811202ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-391485 -n default-k8s-diff-port-391485
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-391485 -n default-k8s-diff-port-391485: exit status 2 (329.565446ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-391485 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-391485 -n default-k8s-diff-port-391485
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-391485 -n default-k8s-diff-port-391485
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.38s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (112.32s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0908 14:53:29.094461 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:29.101006 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:29.112606 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:29.134157 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:29.176383 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:29.258296 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:29.420243 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:29.742449 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:30.384538 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:31.666403 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:34.228404 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:39.350984 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:49.592952 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:53:53.440810 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m52.322698788s)
--- PASS: TestNetworkPlugins/group/calico/Start (112.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-202543 image list --format=json
E0908 14:54:07.148302 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-814283 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-202543 --alsologtostderr -v=1
I0908 14:54:07.437438 1120875 config.go:182] Loaded profile config "auto-814283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-202543 --alsologtostderr -v=1: (1.18904361s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202543 -n newest-cni-202543
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202543 -n newest-cni-202543: exit status 2 (383.052509ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202543 -n newest-cni-202543
E0908 14:54:09.071439 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202543 -n newest-cni-202543: exit status 2 (351.966931ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-202543 --alsologtostderr -v=1
E0908 14:54:10.074956 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-202543 --alsologtostderr -v=1: (1.346838338s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202543 -n newest-cni-202543
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202543 -n newest-cni-202543
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.13s)
E0908 14:56:12.958887 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:56:13.134228 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-814283 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k7zlz" [7ba0ab44-71e6-4089-bbe5-932ffbc31c08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 14:54:07.790050 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-k7zlz" [7ba0ab44-71e6-4089-bbe5-932ffbc31c08] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005562982s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)
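
NetCatPod force-replaces the netcat deployment and then polls until its pod is Running. A hedged standalone equivalent of that poll, using the same manifest path shown in the log:

    kubectl --context auto-814283 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-814283 wait --for=condition=ready pod -l app=netcat --timeout=15m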

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (82.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0908 14:54:16.755775 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m22.284621621s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (82.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-814283 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
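
The DNS, Localhost, and HairPin subtests above probe three distinct paths from inside the netcat pod: cluster DNS resolution, a connection back to the pod's own localhost, and hairpin traffic through the pod's own service name. The probes are the exact commands from the log and can be rerun as-is against the same context:

    kubectl --context auto-814283 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"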

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (104.06s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m44.063065952s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (104.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-z6hr6" [c823f103-0d54-4197-98a7-fc3cc3cac810] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006720269s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-814283 "pgrep -a kubelet"
I0908 14:54:42.278440 1120875 config.go:182] Loaded profile config "kindnet-814283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-814283 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-59vtb" [32839d7c-1e34-4d55-92c7-0195f9082ce9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 14:54:47.478948 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-59vtb" [32839d7c-1e34-4d55-92c7-0195f9082ce9] Running
E0908 14:54:51.036930 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/old-k8s-version-454279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:51.186195 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:51.192712 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:51.204249 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:51.226438 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:51.270517 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:51.352404 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:51.514766 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:51.836434 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:54:52.479012 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.006187786s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-814283 exec deployment/netcat -- nslookup kubernetes.default
E0908 14:54:53.760903 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)
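The Localhost and HairPin checks both probe port 8080 from inside the netcat pod with nc: Localhost dials the pod's own loopback address, while HairPin dials the pod's own service name ("netcat"), which only succeeds when traffic can loop back through the service VIP to the originating pod. A minimal sketch of the two probes, assuming the same netcat deployment and service:

    # localhost reachability (pod -> its own loopback)
    kubectl --context kindnet-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin reachability (pod -> its own service VIP and back)
    kubectl --context kindnet-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"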

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-4xq47" [f735ca87-3bd3-4dfe-90f6-0656a4b30ad6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E0908 14:55:11.689282 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-4xq47" [f735ca87-3bd3-4dfe-90f6-0656a4b30ad6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006169365s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
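The ControllerPod checks poll for the CNI's node agent by label until it reports Ready. A roughly equivalent one-liner, assuming the calico-814283 context exists (the timeout mirrors the test's 10m window):

    kubectl --context calico-814283 -n kube-system wait --for=condition=ready pod -l k8s-app=calico-node --timeout=10m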

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (90.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0908 14:55:16.517689 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/addons-674449/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m30.190853931s)
--- PASS: TestNetworkPlugins/group/flannel/Start (90.19s)
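Each Start test boots a fresh cluster with the named CNI and waits for all components to come up. A hand-run equivalent of the flannel case, under the same flags as the log (the profile name is taken from the log; any name works):

    minikube start -p flannel-814283 --memory=3072 --wait=true --wait-timeout=15m \
        --cni=flannel --driver=kvm2 --container-runtime=crio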

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-814283 "pgrep -a kubelet"
I0908 14:55:17.129519 1120875 config.go:182] Loaded profile config "calico-814283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)
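KubeletFlags simply lists the running kubelet process on the node so its command-line flags can be asserted. The same view is available interactively:

    minikube ssh -p calico-814283 "pgrep -a kubelet"
    # prints the kubelet PID followed by its full argument list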

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-814283 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kgwqb" [6b0233db-3505-4b14-9e05-fa0d8135e8f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kgwqb" [6b0233db-3505-4b14-9e05-fa0d8135e8f7] Running
E0908 14:55:28.440955 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.00511035s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.41s)
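NetCatPod (re)creates a small netcat deployment and waits for it to become healthy before the DNS/Localhost/HairPin probes run against it. A compressed manual version, assuming the same testdata manifest is available locally:

    kubectl --context calico-814283 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context calico-814283 wait --for=condition=ready pod -l app=netcat --timeout=15m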

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-814283 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-814283 "pgrep -a kubelet"
I0908 14:55:35.579330 1120875 config.go:182] Loaded profile config "custom-flannel-814283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-814283 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nwz47" [23a7519e-3a69-43a1-a66b-a7160d6e53a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nwz47" [23a7519e-3a69-43a1-a66b-a7160d6e53a3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.006389111s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-814283 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (99.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-814283 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m39.127560306s)
--- PASS: TestNetworkPlugins/group/bridge/Start (99.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-814283 "pgrep -a kubelet"
I0908 14:56:20.300672 1120875 config.go:182] Loaded profile config "enable-default-cni-814283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-814283 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gqcwq" [b699f6c7-f9eb-43dc-88fe-0f5bd379bad6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gqcwq" [b699f6c7-f9eb-43dc-88fe-0f5bd379bad6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005694747s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-814283 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-rflz8" [0f2ec6fd-c47e-44d5-9acf-1f3c8e8720d4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004448435s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-814283 "pgrep -a kubelet"
I0908 14:56:49.809285 1120875 config.go:182] Loaded profile config "flannel-814283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-814283 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jr5j2" [1923b4c2-1de2-44cc-bdee-0e57d0ad109d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 14:56:50.362813 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/no-preload-301894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-jr5j2" [1923b4c2-1de2-44cc-bdee-0e57d0ad109d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00559646s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-814283 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-814283 "pgrep -a kubelet"
I0908 14:57:32.267709 1120875 config.go:182] Loaded profile config "bridge-814283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-814283 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sw8vw" [434a1330-8b25-4f94-92b6-5a3fe41a50f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 14:57:35.055588 1120875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/default-k8s-diff-port-391485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-sw8vw" [434a1330-8b25-4f94-92b6-5a3fe41a50f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003986198s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-814283 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-814283 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.35
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
272 TestStartStop/group/disable-driver-mounts 0.2
278 TestNetworkPlugins/group/kubenet 3.87
286 TestNetworkPlugins/group/cilium 4.15
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.35s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-674449 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.35s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-737327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-737327
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-814283 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-814283" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-814283" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:41:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.50.183:8443
  name: cert-expiration-001432
contexts:
- context:
    cluster: cert-expiration-001432
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:41:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: cert-expiration-001432
  name: cert-expiration-001432
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-001432
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/cert-expiration-001432/client.crt
    client-key: /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/cert-expiration-001432/client.key
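The dump above shows an empty current-context, and every debugLogs query in this section targets --context kubenet-814283, a profile that was never created because the test skipped before starting a cluster; hence the repeated "context was not found" / "does not exist" errors. Standard kubectl commands to confirm what would be used (the context name here comes from the dump):

    kubectl config get-contexts
    kubectl config current-context   # errors when current-context is unset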

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-814283

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814283"

                                                
                                                
----------------------- debugLogs end: kubenet-814283 [took: 3.703039208s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-814283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-814283
--- SKIP: TestNetworkPlugins/group/kubenet (3.87s)
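Every ">>> host:" probe above fails with the same "Profile not found" hint because the kubenet-814283 profile was never created for this skipped test, yet the debug collector still walks its full probe list. A minimal sketch of a guard that would short-circuit those probes (a hypothetical helper, not minikube's actual debugLogs implementation; the binary path is the one used throughout this report):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// profileExists reports whether "minikube profile list" mentions the profile.
func profileExists(name string) bool {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list").Output()
	if err != nil {
		return false
	}
	return strings.Contains(string(out), name)
}

func main() {
	const profile = "kubenet-814283"
	if !profileExists(profile) {
		fmt.Printf("profile %q not found; skipping host-level debug probes\n", profile)
		return
	}
	// Otherwise run the ">>> host:" probes (daemon status, configs, crictl, ...).
}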

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-814283 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-814283
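The netcat probes above (nslookup, dig, and nc against 10.96.0.10 on udp/53 and tcp/53) all target the conventional in-cluster DNS service IP; they fail here only because the cilium-814283 context does not exist, not because cluster DNS is unreachable. A minimal Go sketch of the lookup the dig probes perform, assuming 10.96.0.10 is the cluster DNS address as the probe list does:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Send every lookup to the cluster DNS service instead of /etc/resolv.conf;
		// network arrives as "udp" or "tcp", mirroring the udp/53 and tcp/53 probes.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}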

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-814283" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-1116714/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:41:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.50.183:8443
  name: cert-expiration-001432
contexts:
- context:
    cluster: cert-expiration-001432
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:41:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: cert-expiration-001432
  name: cert-expiration-001432
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-001432
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/cert-expiration-001432/client.crt
    client-key: /home/jenkins/minikube-integration/21508-1116714/.minikube/profiles/cert-expiration-001432/client.key
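Two details in this dump explain the two error shapes seen throughout this section: current-context is "" (so kubectl calls that rely on the current context fail with "Error in configuration"), and no cilium-814283 entry exists at all (so kubectl --context cilium-814283 reports that the context does not exist). A minimal sketch that surfaces both from the same file, assuming k8s.io/client-go is available (the harness itself shells out to kubectl; this is only illustrative) and falling back to the standard kubeconfig location, since the report does not show the actual KUBECONFIG path:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG") // hypothetical; the harness's real path may differ
	if path == "" {
		path = clientcmd.RecommendedHomeFile // ~/.kube/config
	}
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext) // "" in the dump above
	if _, ok := cfg.Contexts["cilium-814283"]; !ok {
		// Matches: error: context "cilium-814283" does not exist
		fmt.Println(`context "cilium-814283" does not exist`)
	}
}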

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-814283

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-814283" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814283"

                                                
                                                
----------------------- debugLogs end: cilium-814283 [took: 3.956455474s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-814283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-814283
--- SKIP: TestNetworkPlugins/group/cilium (4.15s)

                                                
                                    