Test Report: KVM_Linux_crio 21701

39a663ec30ddfd049b0783b78fdfbb9970ee2a8a:2025-10-06:41791

Failed tests (6/321)

Order  Failed test                                      Duration (s)
37     TestAddons/parallel/Ingress                      155.59
147    TestFunctional/parallel/MountCmd/specific-port   13.22
244    TestPreload                                      137.13
252    TestKubernetesUpgrade                            931.33
273    TestNoKubernetes/serial/ProfileList              124.03
281    TestPause/serial/SecondStartNoReconfiguration    42.09
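
Any failure in this list can usually be rerun in isolation from a minikube checkout; a minimal sketch, assuming the integration harness's -minikube-start-args flag and the kvm2/crio combination this job uses:

  # rerun only the failing ingress test with the same driver and runtime
  go test ./test/integration -v -timeout 30m \
    -run 'TestAddons/parallel/Ingress' \
    -minikube-start-args='--driver=kvm2 --container-runtime=crio'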
TestAddons/parallel/Ingress (155.59s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-395535 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-395535 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-395535 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [795dc70e-a62b-4ffc-a2c2-c63baf69c4c2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [795dc70e-a62b-4ffc-a2c2-c63baf69c4c2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005460667s
I1006 13:53:37.189859  743851 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-395535 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.752090609s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-395535 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.36
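
Exit status 28 is curl's "operation timed out" code, surfaced through ssh, so the probe ran inside the VM but the ingress never completed a response on 127.0.0.1:80 within the deadline. A minimal sketch for repeating the same check by hand, with the profile name and Host header taken from the log above:

  # repeat the in-VM probe with an explicit timeout and verbose output
  minikube -p addons-395535 ssh -- \
    curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/
  # then inspect the controller the probe should have reached
  kubectl --context addons-395535 -n ingress-nginx get pods,svc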
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-395535 -n addons-395535
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-395535 logs -n 25: (1.641269678s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-672709                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-672709 │ jenkins │ v1.37.0 │ 06 Oct 25 13:50 UTC │ 06 Oct 25 13:50 UTC │
	│ start   │ --download-only -p binary-mirror-278171 --alsologtostderr --binary-mirror http://127.0.0.1:43617 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-278171 │ jenkins │ v1.37.0 │ 06 Oct 25 13:50 UTC │                     │
	│ delete  │ -p binary-mirror-278171                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-278171 │ jenkins │ v1.37.0 │ 06 Oct 25 13:50 UTC │ 06 Oct 25 13:50 UTC │
	│ addons  │ disable dashboard -p addons-395535                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:50 UTC │                     │
	│ addons  │ enable dashboard -p addons-395535                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:50 UTC │                     │
	│ start   │ -p addons-395535 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:50 UTC │ 06 Oct 25 13:52 UTC │
	│ addons  │ addons-395535 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:52 UTC │ 06 Oct 25 13:52 UTC │
	│ addons  │ addons-395535 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:52 UTC │ 06 Oct 25 13:53 UTC │
	│ addons  │ enable headlamp -p addons-395535 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ addons  │ addons-395535 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ addons  │ addons-395535 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ addons  │ addons-395535 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ ip      │ addons-395535 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ addons  │ addons-395535 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ ssh     │ addons-395535 ssh cat /opt/local-path-provisioner/pvc-7d09967f-9c88-48c9-87eb-fc54c796f56b_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ addons  │ addons-395535 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ addons  │ addons-395535 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ addons  │ addons-395535 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ addons  │ addons-395535 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-395535                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ ssh     │ addons-395535 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │                     │
	│ addons  │ addons-395535 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:53 UTC │ 06 Oct 25 13:53 UTC │
	│ addons  │ addons-395535 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:54 UTC │ 06 Oct 25 13:54 UTC │
	│ addons  │ addons-395535 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:54 UTC │ 06 Oct 25 13:54 UTC │
	│ ip      │ addons-395535 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-395535        │ jenkins │ v1.37.0 │ 06 Oct 25 13:55 UTC │ 06 Oct 25 13:55 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 13:50:25
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 13:50:25.321577  744457 out.go:360] Setting OutFile to fd 1 ...
	I1006 13:50:25.321864  744457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:50:25.321874  744457 out.go:374] Setting ErrFile to fd 2...
	I1006 13:50:25.321879  744457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:50:25.322149  744457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 13:50:25.322784  744457 out.go:368] Setting JSON to false
	I1006 13:50:25.323773  744457 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12776,"bootTime":1759745849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 13:50:25.323875  744457 start.go:140] virtualization: kvm guest
	I1006 13:50:25.325665  744457 out.go:179] * [addons-395535] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 13:50:25.326922  744457 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 13:50:25.326929  744457 notify.go:220] Checking for updates...
	I1006 13:50:25.329338  744457 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 13:50:25.330965  744457 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 13:50:25.332321  744457 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 13:50:25.333866  744457 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 13:50:25.335156  744457 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 13:50:25.337032  744457 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 13:50:25.367965  744457 out.go:179] * Using the kvm2 driver based on user configuration
	I1006 13:50:25.369426  744457 start.go:304] selected driver: kvm2
	I1006 13:50:25.369447  744457 start.go:924] validating driver "kvm2" against <nil>
	I1006 13:50:25.369480  744457 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 13:50:25.370520  744457 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 13:50:25.370637  744457 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 13:50:25.384913  744457 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 13:50:25.384947  744457 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 13:50:25.398983  744457 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 13:50:25.399065  744457 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 13:50:25.399472  744457 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 13:50:25.399511  744457 cni.go:84] Creating CNI manager for ""
	I1006 13:50:25.399574  744457 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 13:50:25.399596  744457 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1006 13:50:25.399663  744457 start.go:348] cluster config:
	{Name:addons-395535 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-395535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 13:50:25.399803  744457 iso.go:125] acquiring lock: {Name:mk8de6812bb58933af0bc6eb1d955bf118a3bcec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 13:50:25.402728  744457 out.go:179] * Starting "addons-395535" primary control-plane node in "addons-395535" cluster
	I1006 13:50:25.403965  744457 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 13:50:25.404022  744457 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 13:50:25.404035  744457 cache.go:58] Caching tarball of preloaded images
	I1006 13:50:25.404140  744457 preload.go:233] Found /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 13:50:25.404150  744457 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 13:50:25.404455  744457 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/config.json ...
	I1006 13:50:25.404487  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/config.json: {Name:mk347476fa0232453e34566b97b405adba931cbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:25.404664  744457 start.go:360] acquireMachinesLock for addons-395535: {Name:mkc5be1cfc8fcefa1839aef4c67a376cc5095e30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1006 13:50:25.404723  744457 start.go:364] duration metric: took 41.504µs to acquireMachinesLock for "addons-395535"
	I1006 13:50:25.404741  744457 start.go:93] Provisioning new machine with config: &{Name:addons-395535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-395535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 13:50:25.404796  744457 start.go:125] createHost starting for "" (driver="kvm2")
	I1006 13:50:25.406323  744457 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1006 13:50:25.406470  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:50:25.406519  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:50:25.420437  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I1006 13:50:25.421025  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:50:25.421712  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:50:25.421735  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:50:25.422144  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:50:25.422346  744457 main.go:141] libmachine: (addons-395535) Calling .GetMachineName
	I1006 13:50:25.422522  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:50:25.422737  744457 start.go:159] libmachine.API.Create for "addons-395535" (driver="kvm2")
	I1006 13:50:25.422770  744457 client.go:168] LocalClient.Create starting
	I1006 13:50:25.422808  744457 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem
	I1006 13:50:25.869770  744457 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem
	I1006 13:50:25.995165  744457 main.go:141] libmachine: Running pre-create checks...
	I1006 13:50:25.995190  744457 main.go:141] libmachine: (addons-395535) Calling .PreCreateCheck
	I1006 13:50:25.995761  744457 main.go:141] libmachine: (addons-395535) Calling .GetConfigRaw
	I1006 13:50:25.996406  744457 main.go:141] libmachine: Creating machine...
	I1006 13:50:25.996428  744457 main.go:141] libmachine: (addons-395535) Calling .Create
	I1006 13:50:25.996700  744457 main.go:141] libmachine: (addons-395535) creating domain...
	I1006 13:50:25.996716  744457 main.go:141] libmachine: (addons-395535) creating network...
	I1006 13:50:25.998484  744457 main.go:141] libmachine: (addons-395535) DBG | found existing default network
	I1006 13:50:25.998866  744457 main.go:141] libmachine: (addons-395535) DBG | <network>
	I1006 13:50:25.998889  744457 main.go:141] libmachine: (addons-395535) DBG |   <name>default</name>
	I1006 13:50:25.998897  744457 main.go:141] libmachine: (addons-395535) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1006 13:50:25.998905  744457 main.go:141] libmachine: (addons-395535) DBG |   <forward mode='nat'>
	I1006 13:50:25.998930  744457 main.go:141] libmachine: (addons-395535) DBG |     <nat>
	I1006 13:50:25.998946  744457 main.go:141] libmachine: (addons-395535) DBG |       <port start='1024' end='65535'/>
	I1006 13:50:25.998952  744457 main.go:141] libmachine: (addons-395535) DBG |     </nat>
	I1006 13:50:25.998961  744457 main.go:141] libmachine: (addons-395535) DBG |   </forward>
	I1006 13:50:25.998968  744457 main.go:141] libmachine: (addons-395535) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1006 13:50:25.998979  744457 main.go:141] libmachine: (addons-395535) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1006 13:50:25.999016  744457 main.go:141] libmachine: (addons-395535) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1006 13:50:25.999035  744457 main.go:141] libmachine: (addons-395535) DBG |     <dhcp>
	I1006 13:50:25.999049  744457 main.go:141] libmachine: (addons-395535) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1006 13:50:25.999085  744457 main.go:141] libmachine: (addons-395535) DBG |     </dhcp>
	I1006 13:50:25.999098  744457 main.go:141] libmachine: (addons-395535) DBG |   </ip>
	I1006 13:50:25.999107  744457 main.go:141] libmachine: (addons-395535) DBG | </network>
	I1006 13:50:25.999121  744457 main.go:141] libmachine: (addons-395535) DBG | 
	I1006 13:50:25.999705  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:25.999532  744485 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000208f10}
	I1006 13:50:25.999776  744457 main.go:141] libmachine: (addons-395535) DBG | defining private network:
	I1006 13:50:25.999796  744457 main.go:141] libmachine: (addons-395535) DBG | 
	I1006 13:50:25.999802  744457 main.go:141] libmachine: (addons-395535) DBG | <network>
	I1006 13:50:25.999813  744457 main.go:141] libmachine: (addons-395535) DBG |   <name>mk-addons-395535</name>
	I1006 13:50:25.999821  744457 main.go:141] libmachine: (addons-395535) DBG |   <dns enable='no'/>
	I1006 13:50:25.999837  744457 main.go:141] libmachine: (addons-395535) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1006 13:50:25.999849  744457 main.go:141] libmachine: (addons-395535) DBG |     <dhcp>
	I1006 13:50:25.999857  744457 main.go:141] libmachine: (addons-395535) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1006 13:50:25.999869  744457 main.go:141] libmachine: (addons-395535) DBG |     </dhcp>
	I1006 13:50:25.999875  744457 main.go:141] libmachine: (addons-395535) DBG |   </ip>
	I1006 13:50:25.999885  744457 main.go:141] libmachine: (addons-395535) DBG | </network>
	I1006 13:50:25.999889  744457 main.go:141] libmachine: (addons-395535) DBG | 
	I1006 13:50:26.006941  744457 main.go:141] libmachine: (addons-395535) DBG | creating private network mk-addons-395535 192.168.39.0/24...
	I1006 13:50:26.077207  744457 main.go:141] libmachine: (addons-395535) DBG | private network mk-addons-395535 192.168.39.0/24 created
	I1006 13:50:26.077417  744457 main.go:141] libmachine: (addons-395535) DBG | <network>
	I1006 13:50:26.077454  744457 main.go:141] libmachine: (addons-395535) DBG |   <name>mk-addons-395535</name>
	I1006 13:50:26.077469  744457 main.go:141] libmachine: (addons-395535) setting up store path in /home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535 ...
	I1006 13:50:26.077488  744457 main.go:141] libmachine: (addons-395535) building disk image from file:///home/jenkins/minikube-integration/21701-739942/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1006 13:50:26.077503  744457 main.go:141] libmachine: (addons-395535) DBG |   <uuid>4d39ebe5-e794-4be2-9580-9fbcdbafc18a</uuid>
	I1006 13:50:26.077514  744457 main.go:141] libmachine: (addons-395535) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1006 13:50:26.077546  744457 main.go:141] libmachine: (addons-395535) DBG |   <mac address='52:54:00:21:53:a8'/>
	I1006 13:50:26.077572  744457 main.go:141] libmachine: (addons-395535) Downloading /home/jenkins/minikube-integration/21701-739942/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21701-739942/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1006 13:50:26.077599  744457 main.go:141] libmachine: (addons-395535) DBG |   <dns enable='no'/>
	I1006 13:50:26.077614  744457 main.go:141] libmachine: (addons-395535) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1006 13:50:26.077630  744457 main.go:141] libmachine: (addons-395535) DBG |     <dhcp>
	I1006 13:50:26.077643  744457 main.go:141] libmachine: (addons-395535) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1006 13:50:26.077649  744457 main.go:141] libmachine: (addons-395535) DBG |     </dhcp>
	I1006 13:50:26.077659  744457 main.go:141] libmachine: (addons-395535) DBG |   </ip>
	I1006 13:50:26.077663  744457 main.go:141] libmachine: (addons-395535) DBG | </network>
	I1006 13:50:26.077671  744457 main.go:141] libmachine: (addons-395535) DBG | 
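
The <network> XML scattered through the DBG lines above is an ordinary libvirt network definition; the same network could be built by hand with virsh. A sketch, assuming that XML were saved verbatim to a file (the file name here is made up):

  virsh net-define mk-addons-395535.xml   # register the isolated network with libvirt
  virsh net-start mk-addons-395535        # bring up virbr1 with its DHCP range
  virsh net-dumpxml mk-addons-395535      # should match the dump in the log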
	I1006 13:50:26.077680  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:26.077419  744485 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 13:50:26.357246  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:26.357102  744485 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa...
	I1006 13:50:26.709764  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:26.709610  744485 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/addons-395535.rawdisk...
	I1006 13:50:26.709790  744457 main.go:141] libmachine: (addons-395535) DBG | Writing magic tar header
	I1006 13:50:26.709861  744457 main.go:141] libmachine: (addons-395535) DBG | Writing SSH key tar header
	I1006 13:50:26.709899  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:26.709737  744485 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535 ...
	I1006 13:50:26.709938  744457 main.go:141] libmachine: (addons-395535) setting executable bit set on /home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535 (perms=drwx------)
	I1006 13:50:26.709956  744457 main.go:141] libmachine: (addons-395535) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535
	I1006 13:50:26.709963  744457 main.go:141] libmachine: (addons-395535) setting executable bit set on /home/jenkins/minikube-integration/21701-739942/.minikube/machines (perms=drwxr-xr-x)
	I1006 13:50:26.709994  744457 main.go:141] libmachine: (addons-395535) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21701-739942/.minikube/machines
	I1006 13:50:26.710014  744457 main.go:141] libmachine: (addons-395535) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 13:50:26.710024  744457 main.go:141] libmachine: (addons-395535) setting executable bit set on /home/jenkins/minikube-integration/21701-739942/.minikube (perms=drwxr-xr-x)
	I1006 13:50:26.710033  744457 main.go:141] libmachine: (addons-395535) setting executable bit set on /home/jenkins/minikube-integration/21701-739942 (perms=drwxrwxr-x)
	I1006 13:50:26.710039  744457 main.go:141] libmachine: (addons-395535) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1006 13:50:26.710044  744457 main.go:141] libmachine: (addons-395535) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21701-739942
	I1006 13:50:26.710060  744457 main.go:141] libmachine: (addons-395535) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1006 13:50:26.710068  744457 main.go:141] libmachine: (addons-395535) DBG | checking permissions on dir: /home/jenkins
	I1006 13:50:26.710075  744457 main.go:141] libmachine: (addons-395535) DBG | checking permissions on dir: /home
	I1006 13:50:26.710082  744457 main.go:141] libmachine: (addons-395535) DBG | skipping /home - not owner
	I1006 13:50:26.710101  744457 main.go:141] libmachine: (addons-395535) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1006 13:50:26.710115  744457 main.go:141] libmachine: (addons-395535) defining domain...
	I1006 13:50:26.711290  744457 main.go:141] libmachine: (addons-395535) defining domain using XML: 
	I1006 13:50:26.711313  744457 main.go:141] libmachine: (addons-395535) <domain type='kvm'>
	I1006 13:50:26.711336  744457 main.go:141] libmachine: (addons-395535)   <name>addons-395535</name>
	I1006 13:50:26.711347  744457 main.go:141] libmachine: (addons-395535)   <memory unit='MiB'>4096</memory>
	I1006 13:50:26.711353  744457 main.go:141] libmachine: (addons-395535)   <vcpu>2</vcpu>
	I1006 13:50:26.711369  744457 main.go:141] libmachine: (addons-395535)   <features>
	I1006 13:50:26.711386  744457 main.go:141] libmachine: (addons-395535)     <acpi/>
	I1006 13:50:26.711394  744457 main.go:141] libmachine: (addons-395535)     <apic/>
	I1006 13:50:26.711405  744457 main.go:141] libmachine: (addons-395535)     <pae/>
	I1006 13:50:26.711409  744457 main.go:141] libmachine: (addons-395535)   </features>
	I1006 13:50:26.711416  744457 main.go:141] libmachine: (addons-395535)   <cpu mode='host-passthrough'>
	I1006 13:50:26.711420  744457 main.go:141] libmachine: (addons-395535)   </cpu>
	I1006 13:50:26.711427  744457 main.go:141] libmachine: (addons-395535)   <os>
	I1006 13:50:26.711433  744457 main.go:141] libmachine: (addons-395535)     <type>hvm</type>
	I1006 13:50:26.711456  744457 main.go:141] libmachine: (addons-395535)     <boot dev='cdrom'/>
	I1006 13:50:26.711479  744457 main.go:141] libmachine: (addons-395535)     <boot dev='hd'/>
	I1006 13:50:26.711491  744457 main.go:141] libmachine: (addons-395535)     <bootmenu enable='no'/>
	I1006 13:50:26.711504  744457 main.go:141] libmachine: (addons-395535)   </os>
	I1006 13:50:26.711526  744457 main.go:141] libmachine: (addons-395535)   <devices>
	I1006 13:50:26.711540  744457 main.go:141] libmachine: (addons-395535)     <disk type='file' device='cdrom'>
	I1006 13:50:26.711554  744457 main.go:141] libmachine: (addons-395535)       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/boot2docker.iso'/>
	I1006 13:50:26.711564  744457 main.go:141] libmachine: (addons-395535)       <target dev='hdc' bus='scsi'/>
	I1006 13:50:26.711619  744457 main.go:141] libmachine: (addons-395535)       <readonly/>
	I1006 13:50:26.711642  744457 main.go:141] libmachine: (addons-395535)     </disk>
	I1006 13:50:26.711656  744457 main.go:141] libmachine: (addons-395535)     <disk type='file' device='disk'>
	I1006 13:50:26.711669  744457 main.go:141] libmachine: (addons-395535)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1006 13:50:26.711684  744457 main.go:141] libmachine: (addons-395535)       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/addons-395535.rawdisk'/>
	I1006 13:50:26.711695  744457 main.go:141] libmachine: (addons-395535)       <target dev='hda' bus='virtio'/>
	I1006 13:50:26.711707  744457 main.go:141] libmachine: (addons-395535)     </disk>
	I1006 13:50:26.711721  744457 main.go:141] libmachine: (addons-395535)     <interface type='network'>
	I1006 13:50:26.711735  744457 main.go:141] libmachine: (addons-395535)       <source network='mk-addons-395535'/>
	I1006 13:50:26.711742  744457 main.go:141] libmachine: (addons-395535)       <model type='virtio'/>
	I1006 13:50:26.711753  744457 main.go:141] libmachine: (addons-395535)     </interface>
	I1006 13:50:26.711762  744457 main.go:141] libmachine: (addons-395535)     <interface type='network'>
	I1006 13:50:26.711773  744457 main.go:141] libmachine: (addons-395535)       <source network='default'/>
	I1006 13:50:26.711784  744457 main.go:141] libmachine: (addons-395535)       <model type='virtio'/>
	I1006 13:50:26.711795  744457 main.go:141] libmachine: (addons-395535)     </interface>
	I1006 13:50:26.711806  744457 main.go:141] libmachine: (addons-395535)     <serial type='pty'>
	I1006 13:50:26.711816  744457 main.go:141] libmachine: (addons-395535)       <target port='0'/>
	I1006 13:50:26.711827  744457 main.go:141] libmachine: (addons-395535)     </serial>
	I1006 13:50:26.711836  744457 main.go:141] libmachine: (addons-395535)     <console type='pty'>
	I1006 13:50:26.711848  744457 main.go:141] libmachine: (addons-395535)       <target type='serial' port='0'/>
	I1006 13:50:26.711855  744457 main.go:141] libmachine: (addons-395535)     </console>
	I1006 13:50:26.711878  744457 main.go:141] libmachine: (addons-395535)     <rng model='virtio'>
	I1006 13:50:26.711896  744457 main.go:141] libmachine: (addons-395535)       <backend model='random'>/dev/random</backend>
	I1006 13:50:26.711907  744457 main.go:141] libmachine: (addons-395535)     </rng>
	I1006 13:50:26.711913  744457 main.go:141] libmachine: (addons-395535)   </devices>
	I1006 13:50:26.711924  744457 main.go:141] libmachine: (addons-395535) </domain>
	I1006 13:50:26.711932  744457 main.go:141] libmachine: (addons-395535) 
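
The <domain> XML assembled above is likewise plain libvirt input; minikube drives it through the libmachine kvm2 plugin rather than the CLI, but the equivalent manual flow is roughly this, assuming the XML were saved to addons-395535.xml:

  virsh define addons-395535.xml                 # persist the domain definition
  virsh start addons-395535                      # boot from the attached boot2docker ISO
  virsh domifaddr addons-395535 --source lease   # the IP lookup polled later in the log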
	I1006 13:50:26.719889  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:b5:e4:ae in network default
	I1006 13:50:26.720514  744457 main.go:141] libmachine: (addons-395535) starting domain...
	I1006 13:50:26.720533  744457 main.go:141] libmachine: (addons-395535) ensuring networks are active...
	I1006 13:50:26.720539  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:26.721193  744457 main.go:141] libmachine: (addons-395535) Ensuring network default is active
	I1006 13:50:26.721553  744457 main.go:141] libmachine: (addons-395535) Ensuring network mk-addons-395535 is active
	I1006 13:50:26.722356  744457 main.go:141] libmachine: (addons-395535) getting domain XML...
	I1006 13:50:26.723445  744457 main.go:141] libmachine: (addons-395535) DBG | starting domain XML:
	I1006 13:50:26.723463  744457 main.go:141] libmachine: (addons-395535) DBG | <domain type='kvm'>
	I1006 13:50:26.723474  744457 main.go:141] libmachine: (addons-395535) DBG |   <name>addons-395535</name>
	I1006 13:50:26.723483  744457 main.go:141] libmachine: (addons-395535) DBG |   <uuid>e3dc6272-9d15-4eda-996b-fcf9fce5c454</uuid>
	I1006 13:50:26.723492  744457 main.go:141] libmachine: (addons-395535) DBG |   <memory unit='KiB'>4194304</memory>
	I1006 13:50:26.723500  744457 main.go:141] libmachine: (addons-395535) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1006 13:50:26.723507  744457 main.go:141] libmachine: (addons-395535) DBG |   <vcpu placement='static'>2</vcpu>
	I1006 13:50:26.723512  744457 main.go:141] libmachine: (addons-395535) DBG |   <os>
	I1006 13:50:26.723522  744457 main.go:141] libmachine: (addons-395535) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1006 13:50:26.723527  744457 main.go:141] libmachine: (addons-395535) DBG |     <boot dev='cdrom'/>
	I1006 13:50:26.723535  744457 main.go:141] libmachine: (addons-395535) DBG |     <boot dev='hd'/>
	I1006 13:50:26.723556  744457 main.go:141] libmachine: (addons-395535) DBG |     <bootmenu enable='no'/>
	I1006 13:50:26.723563  744457 main.go:141] libmachine: (addons-395535) DBG |   </os>
	I1006 13:50:26.723571  744457 main.go:141] libmachine: (addons-395535) DBG |   <features>
	I1006 13:50:26.723579  744457 main.go:141] libmachine: (addons-395535) DBG |     <acpi/>
	I1006 13:50:26.723601  744457 main.go:141] libmachine: (addons-395535) DBG |     <apic/>
	I1006 13:50:26.723611  744457 main.go:141] libmachine: (addons-395535) DBG |     <pae/>
	I1006 13:50:26.723637  744457 main.go:141] libmachine: (addons-395535) DBG |   </features>
	I1006 13:50:26.723663  744457 main.go:141] libmachine: (addons-395535) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1006 13:50:26.723674  744457 main.go:141] libmachine: (addons-395535) DBG |   <clock offset='utc'/>
	I1006 13:50:26.723699  744457 main.go:141] libmachine: (addons-395535) DBG |   <on_poweroff>destroy</on_poweroff>
	I1006 13:50:26.723712  744457 main.go:141] libmachine: (addons-395535) DBG |   <on_reboot>restart</on_reboot>
	I1006 13:50:26.723723  744457 main.go:141] libmachine: (addons-395535) DBG |   <on_crash>destroy</on_crash>
	I1006 13:50:26.723733  744457 main.go:141] libmachine: (addons-395535) DBG |   <devices>
	I1006 13:50:26.723744  744457 main.go:141] libmachine: (addons-395535) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1006 13:50:26.723758  744457 main.go:141] libmachine: (addons-395535) DBG |     <disk type='file' device='cdrom'>
	I1006 13:50:26.723772  744457 main.go:141] libmachine: (addons-395535) DBG |       <driver name='qemu' type='raw'/>
	I1006 13:50:26.723788  744457 main.go:141] libmachine: (addons-395535) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/boot2docker.iso'/>
	I1006 13:50:26.723802  744457 main.go:141] libmachine: (addons-395535) DBG |       <target dev='hdc' bus='scsi'/>
	I1006 13:50:26.723812  744457 main.go:141] libmachine: (addons-395535) DBG |       <readonly/>
	I1006 13:50:26.723824  744457 main.go:141] libmachine: (addons-395535) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1006 13:50:26.723839  744457 main.go:141] libmachine: (addons-395535) DBG |     </disk>
	I1006 13:50:26.723852  744457 main.go:141] libmachine: (addons-395535) DBG |     <disk type='file' device='disk'>
	I1006 13:50:26.723862  744457 main.go:141] libmachine: (addons-395535) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1006 13:50:26.723880  744457 main.go:141] libmachine: (addons-395535) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/addons-395535.rawdisk'/>
	I1006 13:50:26.723892  744457 main.go:141] libmachine: (addons-395535) DBG |       <target dev='hda' bus='virtio'/>
	I1006 13:50:26.723905  744457 main.go:141] libmachine: (addons-395535) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1006 13:50:26.723914  744457 main.go:141] libmachine: (addons-395535) DBG |     </disk>
	I1006 13:50:26.723926  744457 main.go:141] libmachine: (addons-395535) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1006 13:50:26.723943  744457 main.go:141] libmachine: (addons-395535) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1006 13:50:26.723955  744457 main.go:141] libmachine: (addons-395535) DBG |     </controller>
	I1006 13:50:26.723968  744457 main.go:141] libmachine: (addons-395535) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1006 13:50:26.723979  744457 main.go:141] libmachine: (addons-395535) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1006 13:50:26.723993  744457 main.go:141] libmachine: (addons-395535) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1006 13:50:26.724043  744457 main.go:141] libmachine: (addons-395535) DBG |     </controller>
	I1006 13:50:26.724055  744457 main.go:141] libmachine: (addons-395535) DBG |     <interface type='network'>
	I1006 13:50:26.724063  744457 main.go:141] libmachine: (addons-395535) DBG |       <mac address='52:54:00:55:35:3c'/>
	I1006 13:50:26.724071  744457 main.go:141] libmachine: (addons-395535) DBG |       <source network='mk-addons-395535'/>
	I1006 13:50:26.724098  744457 main.go:141] libmachine: (addons-395535) DBG |       <model type='virtio'/>
	I1006 13:50:26.724118  744457 main.go:141] libmachine: (addons-395535) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1006 13:50:26.724137  744457 main.go:141] libmachine: (addons-395535) DBG |     </interface>
	I1006 13:50:26.724154  744457 main.go:141] libmachine: (addons-395535) DBG |     <interface type='network'>
	I1006 13:50:26.724181  744457 main.go:141] libmachine: (addons-395535) DBG |       <mac address='52:54:00:b5:e4:ae'/>
	I1006 13:50:26.724204  744457 main.go:141] libmachine: (addons-395535) DBG |       <source network='default'/>
	I1006 13:50:26.724219  744457 main.go:141] libmachine: (addons-395535) DBG |       <model type='virtio'/>
	I1006 13:50:26.724233  744457 main.go:141] libmachine: (addons-395535) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1006 13:50:26.724244  744457 main.go:141] libmachine: (addons-395535) DBG |     </interface>
	I1006 13:50:26.724261  744457 main.go:141] libmachine: (addons-395535) DBG |     <serial type='pty'>
	I1006 13:50:26.724277  744457 main.go:141] libmachine: (addons-395535) DBG |       <target type='isa-serial' port='0'>
	I1006 13:50:26.724282  744457 main.go:141] libmachine: (addons-395535) DBG |         <model name='isa-serial'/>
	I1006 13:50:26.724290  744457 main.go:141] libmachine: (addons-395535) DBG |       </target>
	I1006 13:50:26.724301  744457 main.go:141] libmachine: (addons-395535) DBG |     </serial>
	I1006 13:50:26.724310  744457 main.go:141] libmachine: (addons-395535) DBG |     <console type='pty'>
	I1006 13:50:26.724321  744457 main.go:141] libmachine: (addons-395535) DBG |       <target type='serial' port='0'/>
	I1006 13:50:26.724332  744457 main.go:141] libmachine: (addons-395535) DBG |     </console>
	I1006 13:50:26.724347  744457 main.go:141] libmachine: (addons-395535) DBG |     <input type='mouse' bus='ps2'/>
	I1006 13:50:26.724358  744457 main.go:141] libmachine: (addons-395535) DBG |     <input type='keyboard' bus='ps2'/>
	I1006 13:50:26.724365  744457 main.go:141] libmachine: (addons-395535) DBG |     <audio id='1' type='none'/>
	I1006 13:50:26.724371  744457 main.go:141] libmachine: (addons-395535) DBG |     <memballoon model='virtio'>
	I1006 13:50:26.724383  744457 main.go:141] libmachine: (addons-395535) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1006 13:50:26.724395  744457 main.go:141] libmachine: (addons-395535) DBG |     </memballoon>
	I1006 13:50:26.724405  744457 main.go:141] libmachine: (addons-395535) DBG |     <rng model='virtio'>
	I1006 13:50:26.724428  744457 main.go:141] libmachine: (addons-395535) DBG |       <backend model='random'>/dev/random</backend>
	I1006 13:50:26.724445  744457 main.go:141] libmachine: (addons-395535) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1006 13:50:26.724462  744457 main.go:141] libmachine: (addons-395535) DBG |     </rng>
	I1006 13:50:26.724479  744457 main.go:141] libmachine: (addons-395535) DBG |   </devices>
	I1006 13:50:26.724494  744457 main.go:141] libmachine: (addons-395535) DBG | </domain>
	I1006 13:50:26.724510  744457 main.go:141] libmachine: (addons-395535) DBG | 
	I1006 13:50:27.191044  744457 main.go:141] libmachine: (addons-395535) waiting for domain to start...
	I1006 13:50:27.192345  744457 main.go:141] libmachine: (addons-395535) domain is now running
	I1006 13:50:27.192367  744457 main.go:141] libmachine: (addons-395535) waiting for IP...
	I1006 13:50:27.193099  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:27.193544  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:27.193563  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:27.193829  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:27.193909  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:27.193859  744485 retry.go:31] will retry after 260.89346ms: waiting for domain to come up
	I1006 13:50:27.456390  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:27.456932  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:27.456971  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:27.457186  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:27.457210  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:27.457163  744485 retry.go:31] will retry after 381.499855ms: waiting for domain to come up
	I1006 13:50:27.840866  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:27.841514  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:27.841577  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:27.841800  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:27.841857  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:27.841771  744485 retry.go:31] will retry after 357.715828ms: waiting for domain to come up
	I1006 13:50:28.201479  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:28.201952  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:28.201972  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:28.202323  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:28.202347  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:28.202288  744485 retry.go:31] will retry after 420.944938ms: waiting for domain to come up
	I1006 13:50:28.625144  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:28.625600  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:28.625635  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:28.625906  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:28.625934  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:28.625859  744485 retry.go:31] will retry after 670.748903ms: waiting for domain to come up
	I1006 13:50:29.297943  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:29.298528  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:29.298553  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:29.298924  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:29.299003  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:29.298921  744485 retry.go:31] will retry after 663.319377ms: waiting for domain to come up
	I1006 13:50:29.963909  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:29.964392  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:29.964425  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:29.964710  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:29.964742  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:29.964692  744485 retry.go:31] will retry after 835.286867ms: waiting for domain to come up
	I1006 13:50:30.801613  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:30.802053  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:30.802080  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:30.802330  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:30.802354  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:30.802309  744485 retry.go:31] will retry after 1.010752024s: waiting for domain to come up
	I1006 13:50:31.814639  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:31.815130  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:31.815161  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:31.815447  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:31.815496  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:31.815416  744485 retry.go:31] will retry after 1.259544237s: waiting for domain to come up
	I1006 13:50:33.076296  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:33.076722  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:33.076752  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:33.076982  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:33.077008  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:33.076960  744485 retry.go:31] will retry after 2.039476567s: waiting for domain to come up
	I1006 13:50:35.118893  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:35.119372  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:35.119406  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:35.119619  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:35.119687  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:35.119623  744485 retry.go:31] will retry after 2.477782553s: waiting for domain to come up
	I1006 13:50:37.601168  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:37.601871  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:37.601899  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:37.602095  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:37.602141  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:37.602084  744485 retry.go:31] will retry after 3.131426048s: waiting for domain to come up
	I1006 13:50:40.734901  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:40.735423  744457 main.go:141] libmachine: (addons-395535) DBG | no network interface addresses found for domain addons-395535 (source=lease)
	I1006 13:50:40.735450  744457 main.go:141] libmachine: (addons-395535) DBG | trying to list again with source=arp
	I1006 13:50:40.735688  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find current IP address of domain addons-395535 in network mk-addons-395535 (interfaces detected: [])
	I1006 13:50:40.735741  744457 main.go:141] libmachine: (addons-395535) DBG | I1006 13:50:40.735679  744485 retry.go:31] will retry after 3.977823546s: waiting for domain to come up
	I1006 13:50:44.717938  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:44.718492  744457 main.go:141] libmachine: (addons-395535) found domain IP: 192.168.39.36
	I1006 13:50:44.718519  744457 main.go:141] libmachine: (addons-395535) reserving static IP address...
	I1006 13:50:44.718533  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has current primary IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:44.718965  744457 main.go:141] libmachine: (addons-395535) DBG | unable to find host DHCP lease matching {name: "addons-395535", mac: "52:54:00:55:35:3c", ip: "192.168.39.36"} in network mk-addons-395535
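The wait-for-IP phase above is a plain polling loop: query the DHCP lease, fall back to ARP, and retry with a growing, jittered delay (the log shows waits from ~260ms up to ~4s before the lease lands). A minimal Go sketch of that pattern, with a hypothetical lookupIP standing in for minikube's lease/ARP queries:

	package sketch

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for the DHCP-lease/ARP lookup the
	// DBG lines perform; it exists only to keep the sketch self-contained.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	// waitForIP polls until the domain reports an address, backing off with
	// a jittered, growing delay much like the retry.go lines above.
	func waitForIP(mac string, deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // jitter
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 4*time.Second {
				delay *= 2 // grow toward the multi-second waits seen in the log
			}
		}
		return "", fmt.Errorf("domain with MAC %s never obtained an IP", mac)
	}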
	I1006 13:50:44.928694  744457 main.go:141] libmachine: (addons-395535) DBG | Getting to WaitForSSH function...
	I1006 13:50:44.928723  744457 main.go:141] libmachine: (addons-395535) reserved static IP address 192.168.39.36 for domain addons-395535
	I1006 13:50:44.928735  744457 main.go:141] libmachine: (addons-395535) waiting for SSH...
	I1006 13:50:44.931882  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:44.932257  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:minikube Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:44.932291  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:44.932511  744457 main.go:141] libmachine: (addons-395535) DBG | Using SSH client type: external
	I1006 13:50:44.932539  744457 main.go:141] libmachine: (addons-395535) DBG | Using SSH private key: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa (-rw-------)
	I1006 13:50:44.932608  744457 main.go:141] libmachine: (addons-395535) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1006 13:50:44.932632  744457 main.go:141] libmachine: (addons-395535) DBG | About to run SSH command:
	I1006 13:50:44.932644  744457 main.go:141] libmachine: (addons-395535) DBG | exit 0
	I1006 13:50:45.066293  744457 main.go:141] libmachine: (addons-395535) DBG | SSH cmd err, output: <nil>: 
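The SSH wait is equally simple: shell out to ssh with the options shown at 13:50:44.932 and run `exit 0` until the command returns status zero. A rough equivalent, assuming the docker user and key path from the log; this is a sketch, not minikube's actual WaitForSSH:

	package sketch

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// probeSSH runs `ssh ... exit 0`; a zero exit status means sshd is up.
	func probeSSH(ip, keyPath string) error {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		return cmd.Run()
	}

	// waitForSSH retries the probe until it succeeds or the deadline passes.
	func waitForSSH(ip, keyPath string, deadline time.Duration) error {
		start := time.Now()
		for time.Since(start) < deadline {
			if err := probeSSH(ip, keyPath); err == nil {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh to %s never came up", ip)
	}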
	I1006 13:50:45.066649  744457 main.go:141] libmachine: (addons-395535) domain creation complete
	I1006 13:50:45.066987  744457 main.go:141] libmachine: (addons-395535) Calling .GetConfigRaw
	I1006 13:50:45.067679  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:50:45.067917  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:50:45.068096  744457 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1006 13:50:45.068119  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:50:45.069712  744457 main.go:141] libmachine: Detecting operating system of created instance...
	I1006 13:50:45.069731  744457 main.go:141] libmachine: Waiting for SSH to be available...
	I1006 13:50:45.069760  744457 main.go:141] libmachine: Getting to WaitForSSH function...
	I1006 13:50:45.069767  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:50:45.072752  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.073212  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:45.073244  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.073454  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:50:45.073709  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:45.073903  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:45.074025  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:50:45.074239  744457 main.go:141] libmachine: Using SSH client type: native
	I1006 13:50:45.074464  744457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I1006 13:50:45.074475  744457 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1006 13:50:45.184541  744457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 13:50:45.184574  744457 main.go:141] libmachine: Detecting the provisioner...
	I1006 13:50:45.184583  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:50:45.187753  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.188208  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:45.188233  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.188409  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:50:45.188701  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:45.188913  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:45.189091  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:50:45.189312  744457 main.go:141] libmachine: Using SSH client type: native
	I1006 13:50:45.189529  744457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I1006 13:50:45.189540  744457 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1006 13:50:45.301612  744457 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1006 13:50:45.301687  744457 main.go:141] libmachine: found compatible host: buildroot
	I1006 13:50:45.301696  744457 main.go:141] libmachine: Provisioning with buildroot...
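Detecting the provisioner amounts to running `cat /etc/os-release` over SSH and matching the ID field; Buildroot maps to the buildroot provisioner. A small sketch of that parse, taking the raw output captured above as input:

	package sketch

	import (
		"bufio"
		"strings"
	)

	// osReleaseID extracts the ID= field from /etc/os-release output,
	// e.g. "buildroot" from the block the SSH command returned.
	func osReleaseID(raw string) string {
		sc := bufio.NewScanner(strings.NewReader(raw))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if v, ok := strings.CutPrefix(line, "ID="); ok {
				return strings.Trim(v, `"`)
			}
		}
		return ""
	}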
	I1006 13:50:45.301707  744457 main.go:141] libmachine: (addons-395535) Calling .GetMachineName
	I1006 13:50:45.301979  744457 buildroot.go:166] provisioning hostname "addons-395535"
	I1006 13:50:45.302008  744457 main.go:141] libmachine: (addons-395535) Calling .GetMachineName
	I1006 13:50:45.302229  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:50:45.305616  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.306110  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:45.306161  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.306303  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:50:45.306531  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:45.306768  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:45.306978  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:50:45.307258  744457 main.go:141] libmachine: Using SSH client type: native
	I1006 13:50:45.307474  744457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I1006 13:50:45.307486  744457 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-395535 && echo "addons-395535" | sudo tee /etc/hostname
	I1006 13:50:45.437026  744457 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-395535
	
	I1006 13:50:45.437063  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:50:45.440252  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.440731  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:45.440763  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.440953  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:50:45.441198  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:45.441423  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:45.441623  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:50:45.441843  744457 main.go:141] libmachine: Using SSH client type: native
	I1006 13:50:45.442106  744457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I1006 13:50:45.442125  744457 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-395535' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-395535/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-395535' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 13:50:45.562708  744457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 13:50:45.562744  744457 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21701-739942/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-739942/.minikube}
	I1006 13:50:45.562786  744457 buildroot.go:174] setting up certificates
	I1006 13:50:45.562814  744457 provision.go:84] configureAuth start
	I1006 13:50:45.562829  744457 main.go:141] libmachine: (addons-395535) Calling .GetMachineName
	I1006 13:50:45.563144  744457 main.go:141] libmachine: (addons-395535) Calling .GetIP
	I1006 13:50:45.566655  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.567049  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:45.567073  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.567321  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:50:45.570128  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.570543  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:45.570570  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.570801  744457 provision.go:143] copyHostCerts
	I1006 13:50:45.570886  744457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem (1078 bytes)
	I1006 13:50:45.571048  744457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem (1123 bytes)
	I1006 13:50:45.571134  744457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem (1679 bytes)
	I1006 13:50:45.571201  744457 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem org=jenkins.addons-395535 san=[127.0.0.1 192.168.39.36 addons-395535 localhost minikube]
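The server certificate is issued against the shared CA with exactly the SANs listed (127.0.0.1, the node IP, the machine name, localhost, minikube). A condensed crypto/x509 sketch, assuming the CA certificate and key already exist; the serial number and validity period here are placeholders, not minikube's values:

	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a server certificate signed by the given CA,
	// carrying the IP and DNS SANs from the provision.go line above.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2), // placeholder serial
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-395535"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // placeholder validity
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.36")},
			DNSNames:     []string{"addons-395535", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}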
	I1006 13:50:45.994058  744457 provision.go:177] copyRemoteCerts
	I1006 13:50:45.994127  744457 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 13:50:45.994157  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:50:45.997823  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.998168  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:45.998203  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:45.998423  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:50:45.998710  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:45.998893  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:50:45.999112  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:50:46.087635  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1006 13:50:46.118825  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1006 13:50:46.149320  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 13:50:46.181288  744457 provision.go:87] duration metric: took 618.453057ms to configureAuth
	I1006 13:50:46.181320  744457 buildroot.go:189] setting minikube options for container-runtime
	I1006 13:50:46.181503  744457 config.go:182] Loaded profile config "addons-395535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 13:50:46.181579  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:50:46.184553  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.184999  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:46.185024  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.185253  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:50:46.185545  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:46.185775  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:46.185971  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:50:46.186260  744457 main.go:141] libmachine: Using SSH client type: native
	I1006 13:50:46.186491  744457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I1006 13:50:46.186511  744457 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 13:50:46.436527  744457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 13:50:46.436555  744457 main.go:141] libmachine: Checking connection to Docker...
	I1006 13:50:46.436564  744457 main.go:141] libmachine: (addons-395535) Calling .GetURL
	I1006 13:50:46.437835  744457 main.go:141] libmachine: (addons-395535) DBG | using libvirt version 8000000
	I1006 13:50:46.440168  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.440504  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:46.440532  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.440744  744457 main.go:141] libmachine: Docker is up and running!
	I1006 13:50:46.440756  744457 main.go:141] libmachine: Reticulating splines...
	I1006 13:50:46.440765  744457 client.go:171] duration metric: took 21.017985972s to LocalClient.Create
	I1006 13:50:46.440798  744457 start.go:167] duration metric: took 21.018061229s to libmachine.API.Create "addons-395535"
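The "duration metric" lines are plain time.Since bookkeeping wrapped around each phase, along the lines of:

	package sketch

	import (
		"log"
		"time"
	)

	// timed wraps a phase and logs its wall-clock duration, in the spirit
	// of the "duration metric: took ..." lines above.
	func timed(name string, fn func() error) error {
		start := time.Now()
		err := fn()
		log.Printf("duration metric: took %s to %s", time.Since(start), name)
		return err
	}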
	I1006 13:50:46.440811  744457 start.go:293] postStartSetup for "addons-395535" (driver="kvm2")
	I1006 13:50:46.440822  744457 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 13:50:46.440843  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:50:46.441077  744457 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 13:50:46.441101  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:50:46.443567  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.443963  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:46.443989  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.444132  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:50:46.444319  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:46.444485  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:50:46.444681  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:50:46.530670  744457 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 13:50:46.536154  744457 info.go:137] Remote host: Buildroot 2025.02
	I1006 13:50:46.536182  744457 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/addons for local assets ...
	I1006 13:50:46.536274  744457 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/files for local assets ...
	I1006 13:50:46.536305  744457 start.go:296] duration metric: took 95.487061ms for postStartSetup
	I1006 13:50:46.536355  744457 main.go:141] libmachine: (addons-395535) Calling .GetConfigRaw
	I1006 13:50:46.536968  744457 main.go:141] libmachine: (addons-395535) Calling .GetIP
	I1006 13:50:46.540101  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.540550  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:46.540580  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.540905  744457 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/config.json ...
	I1006 13:50:46.541121  744457 start.go:128] duration metric: took 21.136312059s to createHost
	I1006 13:50:46.541149  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:50:46.543505  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.543876  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:46.543906  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.544066  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:50:46.544259  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:46.544419  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:46.544597  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:50:46.544755  744457 main.go:141] libmachine: Using SSH client type: native
	I1006 13:50:46.544958  744457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I1006 13:50:46.544972  744457 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1006 13:50:46.656965  744457 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759758646.620270968
	
	I1006 13:50:46.656993  744457 fix.go:216] guest clock: 1759758646.620270968
	I1006 13:50:46.657001  744457 fix.go:229] Guest: 2025-10-06 13:50:46.620270968 +0000 UTC Remote: 2025-10-06 13:50:46.541135853 +0000 UTC m=+21.259758758 (delta=79.135115ms)
	I1006 13:50:46.657040  744457 fix.go:200] guest clock delta is within tolerance: 79.135115ms
	I1006 13:50:46.657045  744457 start.go:83] releasing machines lock for "addons-395535", held for 21.252312953s
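The clock check parses the guest's `date +%s.%N` output, subtracts the host-side timestamp, and accepts the machine when the absolute delta is within tolerance; here 1759758646.620270968 minus 1759758646.541135853 gives the logged 79.135115ms. In Go terms:

	package sketch

	import "time"

	// clockDeltaOK reports the absolute guest-host clock skew and whether
	// it falls inside the allowed tolerance, as in the fix.go lines above.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

With guest = time.Unix(1759758646, 620270968) and the Remote timestamp above as host, delta comes out to exactly the 79.135115ms the log reports.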
	I1006 13:50:46.657071  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:50:46.657419  744457 main.go:141] libmachine: (addons-395535) Calling .GetIP
	I1006 13:50:46.660538  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.661008  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:46.661038  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.661303  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:50:46.661949  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:50:46.662177  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:50:46.662278  744457 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 13:50:46.662337  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:50:46.662440  744457 ssh_runner.go:195] Run: cat /version.json
	I1006 13:50:46.662473  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:50:46.665608  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.665879  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.666032  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:46.666058  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.666257  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:46.666288  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:46.666397  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:50:46.666626  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:50:46.666662  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:46.666838  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:50:46.666889  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:50:46.667054  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:50:46.667143  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:50:46.667349  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:50:46.771738  744457 ssh_runner.go:195] Run: systemctl --version
	I1006 13:50:46.778920  744457 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 13:50:46.942416  744457 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 13:50:46.950312  744457 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 13:50:46.950381  744457 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 13:50:46.972150  744457 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 13:50:46.972179  744457 start.go:495] detecting cgroup driver to use...
	I1006 13:50:46.972243  744457 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 13:50:46.992681  744457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 13:50:47.010596  744457 docker.go:218] disabling cri-docker service (if available) ...
	I1006 13:50:47.010658  744457 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 13:50:47.028474  744457 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 13:50:47.045792  744457 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 13:50:47.193899  744457 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 13:50:47.411116  744457 docker.go:234] disabling docker service ...
	I1006 13:50:47.411242  744457 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 13:50:47.431502  744457 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 13:50:47.448169  744457 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 13:50:47.609182  744457 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 13:50:47.755389  744457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 13:50:47.771914  744457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 13:50:47.797531  744457 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 13:50:47.797612  744457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:50:47.811759  744457 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 13:50:47.811826  744457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:50:47.825818  744457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:50:47.839435  744457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:50:47.853278  744457 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 13:50:47.868150  744457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:50:47.881897  744457 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:50:47.904354  744457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:50:47.918171  744457 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 13:50:47.929737  744457 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1006 13:50:47.929796  744457 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1006 13:50:47.952339  744457 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 13:50:47.965905  744457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 13:50:48.113030  744457 ssh_runner.go:195] Run: sudo systemctl restart crio
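Worth noting in this block: the netfilter probe at 13:50:47.929 fails because br_netfilter is not loaded yet, so the code falls back to `modprobe br_netfilter` before enabling IP forwarding; the log itself flags this as "might be okay". The check-then-load pattern, sketched with os/exec:

	package sketch

	import "os/exec"

	// ensureBrNetfilter mirrors the fallback above: if the sysctl key is
	// missing, load the br_netfilter module so bridged traffic traverses
	// iptables as the CNI expects.
	func ensureBrNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
			return nil // key present, module already loaded
		}
		return exec.Command("sudo", "modprobe", "br_netfilter").Run()
	}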
	I1006 13:50:48.233801  744457 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 13:50:48.233896  744457 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 13:50:48.240155  744457 start.go:563] Will wait 60s for crictl version
	I1006 13:50:48.240241  744457 ssh_runner.go:195] Run: which crictl
	I1006 13:50:48.245054  744457 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 13:50:48.292373  744457 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1006 13:50:48.292485  744457 ssh_runner.go:195] Run: crio --version
	I1006 13:50:48.324522  744457 ssh_runner.go:195] Run: crio --version
	I1006 13:50:48.358640  744457 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1006 13:50:48.359885  744457 main.go:141] libmachine: (addons-395535) Calling .GetIP
	I1006 13:50:48.363354  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:48.363767  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:50:48.363802  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:50:48.364179  744457 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1006 13:50:48.369708  744457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 13:50:48.386936  744457 kubeadm.go:883] updating cluster {Name:addons-395535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-395535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 13:50:48.387168  744457 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 13:50:48.387236  744457 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 13:50:48.426297  744457 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1006 13:50:48.426373  744457 ssh_runner.go:195] Run: which lz4
	I1006 13:50:48.431249  744457 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1006 13:50:48.436693  744457 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1006 13:50:48.436744  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1006 13:50:50.044318  744457 crio.go:462] duration metric: took 1.613093244s to copy over tarball
	I1006 13:50:50.044413  744457 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1006 13:50:51.767064  744457 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.722616851s)
	I1006 13:50:51.767097  744457 crio.go:469] duration metric: took 1.72274147s to extract the tarball
	I1006 13:50:51.767108  744457 ssh_runner.go:146] rm: /preloaded.tar.lz4
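The preload flow above is: stat the tarball on the guest, scp the ~409MB archive when the stat fails, extract it into /var with lz4 (xattrs preserved so image layers keep their capabilities), then remove it. A sketch of the two guest-side steps, with runSSH as a hypothetical command runner standing in for ssh_runner:

	package sketch

	// hasPreload mirrors the `stat -c "%s %y" /preloaded.tar.lz4` probe:
	// a non-zero exit means the tarball must be copied over first.
	func hasPreload(runSSH func(args ...string) error) bool {
		return runSSH("stat", "-c", "%s %y", "/preloaded.tar.lz4") == nil
	}

	// extractPreload runs the same tar invocation as the log line above.
	func extractPreload(runSSH func(args ...string) error) error {
		return runSSH("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	}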
	I1006 13:50:51.809752  744457 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 13:50:51.859812  744457 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 13:50:51.859838  744457 cache_images.go:85] Images are preloaded, skipping loading
	I1006 13:50:51.859846  744457 kubeadm.go:934] updating node { 192.168.39.36 8443 v1.34.1 crio true true} ...
	I1006 13:50:51.859964  744457 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-395535 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-395535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 13:50:51.860057  744457 ssh_runner.go:195] Run: crio config
	I1006 13:50:51.909241  744457 cni.go:84] Creating CNI manager for ""
	I1006 13:50:51.909272  744457 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 13:50:51.909295  744457 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 13:50:51.909329  744457 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-395535 NodeName:addons-395535 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 13:50:51.909502  744457 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-395535"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.36"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
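The multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is staged as /var/tmp/minikube/kubeadm.yaml.new below and promoted to kubeadm.yaml once binaries and the kubelet units are in place. As a minimal sketch, assuming a kubeadm new enough to ship the config validate subcommand (it exists in v1.34), a file like this can be sanity-checked before init:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or exercise the full init path without side effects:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run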
	
	I1006 13:50:51.909601  744457 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 13:50:51.923675  744457 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 13:50:51.923755  744457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 13:50:51.937163  744457 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1006 13:50:51.959428  744457 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 13:50:51.983075  744457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1006 13:50:52.006691  744457 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I1006 13:50:52.011663  744457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
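The one-liner above is an idempotent /etc/hosts rewrite; the same pattern, reformatted as a commented sketch:

	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry
	  echo "192.168.39.36	control-plane.minikube.internal"      # append the current mapping
	} > /tmp/h.$$                                                # stage in a PID-keyed temp file
	sudo cp /tmp/h.$$ /etc/hosts                                 # install in a single copy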
	I1006 13:50:52.028667  744457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 13:50:52.182843  744457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 13:50:52.226115  744457 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535 for IP: 192.168.39.36
	I1006 13:50:52.226141  744457 certs.go:195] generating shared ca certs ...
	I1006 13:50:52.226164  744457 certs.go:227] acquiring lock for ca certs: {Name:mkac26b60e1fd10143a5d4dc5ca0de64e9dd4f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:52.226341  744457 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key
	I1006 13:50:52.304281  744457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt ...
	I1006 13:50:52.304314  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt: {Name:mk0170a17aa49090e037ce0edeeb02f101a3f5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:52.304516  744457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key ...
	I1006 13:50:52.304532  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key: {Name:mk7ee8f8ec965a91524d5a3abf39e9970b8acba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:52.304663  744457 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key
	I1006 13:50:52.483041  744457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.crt ...
	I1006 13:50:52.483083  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.crt: {Name:mk1cb4a64fca5a1f78c1920cf2098bdfc9c12f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:52.483291  744457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key ...
	I1006 13:50:52.483322  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key: {Name:mk3b6d7444fe067822944e2593118b5d62b268ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:52.483429  744457 certs.go:257] generating profile certs ...
	I1006 13:50:52.483513  744457 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.key
	I1006 13:50:52.483549  744457 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt with IP's: []
	I1006 13:50:52.825366  744457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt ...
	I1006 13:50:52.825412  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: {Name:mkf4c1676bbd08678e95709f9ba3c8813f8c0003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:52.825651  744457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.key ...
	I1006 13:50:52.825670  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.key: {Name:mk7ebb1713c83fbeac38ec7b6401ec040671d450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:52.825782  744457 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.key.ac24d445
	I1006 13:50:52.825807  744457 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.crt.ac24d445 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.36]
	I1006 13:50:52.944167  744457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.crt.ac24d445 ...
	I1006 13:50:52.944205  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.crt.ac24d445: {Name:mk9d36708e99356a816707a02acabfc290053c48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:52.944404  744457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.key.ac24d445 ...
	I1006 13:50:52.944433  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.key.ac24d445: {Name:mkd27540446f74cb8c3af9302adbab0d7e03de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:52.944543  744457 certs.go:382] copying /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.crt.ac24d445 -> /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.crt
	I1006 13:50:52.944667  744457 certs.go:386] copying /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.key.ac24d445 -> /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.key
	I1006 13:50:52.944747  744457 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/proxy-client.key
	I1006 13:50:52.944775  744457 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/proxy-client.crt with IP's: []
	I1006 13:50:53.126758  744457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/proxy-client.crt ...
	I1006 13:50:53.126792  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/proxy-client.crt: {Name:mkbd18f362af11c2ddcbdd342a32dde7f1a4e6b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:53.126986  744457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/proxy-client.key ...
	I1006 13:50:53.127004  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/proxy-client.key: {Name:mkfb4de14930c478e904ab88efb54aac2ebb12fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:50:53.127223  744457 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem (1679 bytes)
	I1006 13:50:53.127271  744457 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem (1078 bytes)
	I1006 13:50:53.127302  744457 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem (1123 bytes)
	I1006 13:50:53.127338  744457 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem (1679 bytes)
	I1006 13:50:53.127982  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 13:50:53.176445  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 13:50:53.217370  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 13:50:53.252007  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 13:50:53.285367  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 13:50:53.322332  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 13:50:53.356265  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 13:50:53.390222  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 13:50:53.423758  744457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 13:50:53.459077  744457 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 13:50:53.482251  744457 ssh_runner.go:195] Run: openssl version
	I1006 13:50:53.489337  744457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 13:50:53.504641  744457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 13:50:53.510660  744457 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:50 /usr/share/ca-certificates/minikubeCA.pem
	I1006 13:50:53.510726  744457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 13:50:53.519082  744457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
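The b5213941.0 link follows OpenSSL's subject-hash lookup convention: the filename is the CA's subject hash (computed by the openssl x509 -hash call two steps up) plus a .0 suffix, which lets TLS clients in the guest trust minikubeCA without rebuilding the bundle. Reproducing the name by hand, using the path from the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941; OpenSSL resolves CAs via /etc/ssl/certs/<subject-hash>.0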
	I1006 13:50:53.534227  744457 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 13:50:53.539569  744457 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 13:50:53.539657  744457 kubeadm.go:400] StartCluster: {Name:addons-395535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-395535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 13:50:53.539744  744457 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 13:50:53.539795  744457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 13:50:53.583825  744457 cri.go:89] found id: ""
	I1006 13:50:53.583898  744457 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 13:50:53.597848  744457 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 13:50:53.611919  744457 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 13:50:53.625333  744457 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 13:50:53.625364  744457 kubeadm.go:157] found existing configuration files:
	
	I1006 13:50:53.625422  744457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 13:50:53.637839  744457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 13:50:53.637919  744457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 13:50:53.651260  744457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 13:50:53.665873  744457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 13:50:53.665948  744457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 13:50:53.679748  744457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 13:50:53.692133  744457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 13:50:53.692208  744457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 13:50:53.711830  744457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 13:50:53.727234  744457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 13:50:53.727336  744457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 13:50:53.741292  744457 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1006 13:50:53.939909  744457 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 13:51:05.968487  744457 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 13:51:05.968546  744457 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 13:51:05.968698  744457 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 13:51:05.968828  744457 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 13:51:05.968953  744457 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 13:51:05.969079  744457 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 13:51:05.970961  744457 out.go:252]   - Generating certificates and keys ...
	I1006 13:51:05.971088  744457 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 13:51:05.971162  744457 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 13:51:05.971267  744457 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 13:51:05.971347  744457 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 13:51:05.971451  744457 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 13:51:05.971533  744457 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 13:51:05.971623  744457 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 13:51:05.971782  744457 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-395535 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I1006 13:51:05.971850  744457 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 13:51:05.971956  744457 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-395535 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I1006 13:51:05.972018  744457 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 13:51:05.972080  744457 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 13:51:05.972120  744457 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 13:51:05.972173  744457 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 13:51:05.972217  744457 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 13:51:05.972268  744457 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 13:51:05.972320  744457 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 13:51:05.972378  744457 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 13:51:05.972432  744457 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 13:51:05.972509  744457 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 13:51:05.972596  744457 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 13:51:05.974122  744457 out.go:252]   - Booting up control plane ...
	I1006 13:51:05.974254  744457 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 13:51:05.974379  744457 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 13:51:05.974468  744457 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 13:51:05.974633  744457 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 13:51:05.974759  744457 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 13:51:05.974890  744457 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 13:51:05.975012  744457 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 13:51:05.975074  744457 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 13:51:05.975248  744457 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 13:51:05.975394  744457 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 13:51:05.975487  744457 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001801598s
	I1006 13:51:05.975633  744457 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 13:51:05.975746  744457 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.36:8443/livez
	I1006 13:51:05.975863  744457 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 13:51:05.975974  744457 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 13:51:05.976078  744457 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.1158023s
	I1006 13:51:05.976175  744457 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.606885288s
	I1006 13:51:05.976271  744457 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.505401382s
	I1006 13:51:05.976410  744457 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 13:51:05.976520  744457 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 13:51:05.976578  744457 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 13:51:05.976781  744457 kubeadm.go:318] [mark-control-plane] Marking the node addons-395535 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 13:51:05.976870  744457 kubeadm.go:318] [bootstrap-token] Using token: dc1ufj.qe7ts5fxbf116xs6
	I1006 13:51:05.978544  744457 out.go:252]   - Configuring RBAC rules ...
	I1006 13:51:05.978682  744457 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 13:51:05.978783  744457 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 13:51:05.978943  744457 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 13:51:05.979076  744457 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 13:51:05.979176  744457 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 13:51:05.979281  744457 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 13:51:05.979434  744457 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 13:51:05.979500  744457 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 13:51:05.979576  744457 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 13:51:05.979600  744457 kubeadm.go:318] 
	I1006 13:51:05.979698  744457 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 13:51:05.979713  744457 kubeadm.go:318] 
	I1006 13:51:05.979801  744457 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 13:51:05.979812  744457 kubeadm.go:318] 
	I1006 13:51:05.979858  744457 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 13:51:05.979946  744457 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 13:51:05.980024  744457 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 13:51:05.980033  744457 kubeadm.go:318] 
	I1006 13:51:05.980116  744457 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 13:51:05.980124  744457 kubeadm.go:318] 
	I1006 13:51:05.980164  744457 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 13:51:05.980169  744457 kubeadm.go:318] 
	I1006 13:51:05.980214  744457 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 13:51:05.980279  744457 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 13:51:05.980342  744457 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 13:51:05.980348  744457 kubeadm.go:318] 
	I1006 13:51:05.980421  744457 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 13:51:05.980566  744457 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 13:51:05.980600  744457 kubeadm.go:318] 
	I1006 13:51:05.980733  744457 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token dc1ufj.qe7ts5fxbf116xs6 \
	I1006 13:51:05.980880  744457 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a76a02afcbd435b1bfb3f09dd3efd8140ae0e58303b74568634be35c7685a93e \
	I1006 13:51:05.980914  744457 kubeadm.go:318] 	--control-plane 
	I1006 13:51:05.980924  744457 kubeadm.go:318] 
	I1006 13:51:05.981039  744457 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 13:51:05.981051  744457 kubeadm.go:318] 
	I1006 13:51:05.981140  744457 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token dc1ufj.qe7ts5fxbf116xs6 \
	I1006 13:51:05.981254  744457 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a76a02afcbd435b1bfb3f09dd3efd8140ae0e58303b74568634be35c7685a93e 
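The --discovery-token-ca-cert-hash in both join commands is a SHA-256 over the cluster CA's public key. It can be recomputed on the node with the standard kubeadm recipe (a sketch; the CA path comes from certificatesDir in the config above, and an RSA CA key is assumed, which is what minikube generates):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'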
	I1006 13:51:05.981276  744457 cni.go:84] Creating CNI manager for ""
	I1006 13:51:05.981288  744457 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 13:51:05.983040  744457 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1006 13:51:05.984140  744457 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1006 13:51:06.002694  744457 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
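The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI chain; the log does not echo its contents. A representative bridge conflist for the pod CIDR configured above, as a sketch rather than the literal file:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}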
	I1006 13:51:06.035304  744457 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 13:51:06.035464  744457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 13:51:06.035469  744457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-395535 minikube.k8s.io/updated_at=2025_10_06T13_51_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=addons-395535 minikube.k8s.io/primary=true
	I1006 13:51:06.094192  744457 ops.go:34] apiserver oom_adj: -16
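The oom_adj reading of -16 (via the legacy /proc interface) indicates the kubelet applied its critical-pod OOM-score adjustment to the apiserver, so the kernel's OOM killer deprioritizes it under memory pressure; 0 would mean no protection. The check mirrors the command above:

	cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16: apiserver is OOM-deprioritized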
	I1006 13:51:06.228854  744457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 13:51:06.729897  744457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 13:51:07.229138  744457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 13:51:07.729741  744457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 13:51:08.229573  744457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 13:51:08.729831  744457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 13:51:09.229799  744457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 13:51:09.729834  744457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 13:51:10.229362  744457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 13:51:10.375720  744457 kubeadm.go:1113] duration metric: took 4.340334259s to wait for elevateKubeSystemPrivileges
	I1006 13:51:10.375765  744457 kubeadm.go:402] duration metric: took 16.836113098s to StartCluster
	I1006 13:51:10.375790  744457 settings.go:142] acquiring lock: {Name:mk95ac14a932277c5d6f71123bdccb175d870212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:51:10.375917  744457 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 13:51:10.376437  744457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/kubeconfig: {Name:mkb3c6455f820b9fd25629981fabc6cb3d63fb4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:51:10.376715  744457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 13:51:10.376731  744457 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1006 13:51:10.376871  744457 addons.go:69] Setting yakd=true in profile "addons-395535"
	I1006 13:51:10.376896  744457 addons.go:238] Setting addon yakd=true in "addons-395535"
	I1006 13:51:10.376895  744457 addons.go:69] Setting inspektor-gadget=true in profile "addons-395535"
	I1006 13:51:10.376918  744457 addons.go:238] Setting addon inspektor-gadget=true in "addons-395535"
	I1006 13:51:10.376925  744457 config.go:182] Loaded profile config "addons-395535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 13:51:10.376940  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.376932  744457 addons.go:69] Setting registry-creds=true in profile "addons-395535"
	I1006 13:51:10.376949  744457 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-395535"
	I1006 13:51:10.376931  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.376970  744457 addons.go:238] Setting addon registry-creds=true in "addons-395535"
	I1006 13:51:10.376972  744457 addons.go:69] Setting volcano=true in profile "addons-395535"
	I1006 13:51:10.377013  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.377018  744457 addons.go:238] Setting addon volcano=true in "addons-395535"
	I1006 13:51:10.377039  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.377202  744457 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-395535"
	I1006 13:51:10.377228  744457 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-395535"
	I1006 13:51:10.377255  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.377384  744457 addons.go:69] Setting volumesnapshots=true in profile "addons-395535"
	I1006 13:51:10.377395  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.377404  744457 addons.go:238] Setting addon volumesnapshots=true in "addons-395535"
	I1006 13:51:10.377409  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.377419  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.377421  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.377431  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.377426  744457 addons.go:69] Setting metrics-server=true in profile "addons-395535"
	I1006 13:51:10.377454  744457 addons.go:238] Setting addon metrics-server=true in "addons-395535"
	I1006 13:51:10.377466  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.377486  744457 addons.go:69] Setting registry=true in profile "addons-395535"
	I1006 13:51:10.377496  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.377537  744457 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-395535"
	I1006 13:51:10.377829  744457 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-395535"
	I1006 13:51:10.377867  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.376940  744457 addons.go:69] Setting storage-provisioner=true in profile "addons-395535"
	I1006 13:51:10.377899  744457 addons.go:238] Setting addon storage-provisioner=true in "addons-395535"
	I1006 13:51:10.377928  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.376708  744457 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 13:51:10.377395  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.378288  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.378298  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.377498  744457 addons.go:238] Setting addon registry=true in "addons-395535"
	I1006 13:51:10.378345  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.378356  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.378416  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.378430  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.376964  744457 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-395535"
	I1006 13:51:10.379216  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.379275  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.377512  744457 addons.go:69] Setting default-storageclass=true in profile "addons-395535"
	I1006 13:51:10.379699  744457 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-395535"
	I1006 13:51:10.380183  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.380241  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.377519  744457 addons.go:69] Setting gcp-auth=true in profile "addons-395535"
	I1006 13:51:10.382202  744457 mustload.go:65] Loading cluster: addons-395535
	I1006 13:51:10.377521  744457 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-395535"
	I1006 13:51:10.382667  744457 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-395535"
	I1006 13:51:10.382713  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.383135  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.383158  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.377524  744457 addons.go:69] Setting ingress=true in profile "addons-395535"
	I1006 13:51:10.383627  744457 addons.go:238] Setting addon ingress=true in "addons-395535"
	I1006 13:51:10.383668  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.384075  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.384096  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.377529  744457 addons.go:69] Setting ingress-dns=true in profile "addons-395535"
	I1006 13:51:10.384322  744457 addons.go:238] Setting addon ingress-dns=true in "addons-395535"
	I1006 13:51:10.384356  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.385673  744457 out.go:179] * Verifying Kubernetes components...
	I1006 13:51:10.377531  744457 addons.go:69] Setting cloud-spanner=true in profile "addons-395535"
	I1006 13:51:10.385743  744457 addons.go:238] Setting addon cloud-spanner=true in "addons-395535"
	I1006 13:51:10.385884  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.386629  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.386723  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.387678  744457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 13:51:10.377706  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.387918  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.377758  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.388744  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.378788  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.388828  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.377639  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.393176  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.393272  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.395240  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.395328  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.395866  744457 config.go:182] Loaded profile config "addons-395535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 13:51:10.396230  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.396284  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.402936  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36271
	I1006 13:51:10.403255  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I1006 13:51:10.403924  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.404030  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.404735  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.404757  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.404899  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.404916  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.405677  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42315
	I1006 13:51:10.405857  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.406435  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.406459  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.407554  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I1006 13:51:10.409258  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.411033  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.411215  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.411231  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.411727  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.411864  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.411906  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.411928  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.415164  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.418853  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.418884  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.418970  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34707
	I1006 13:51:10.419349  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.420030  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.420079  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.426081  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.426230  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37073
	I1006 13:51:10.426956  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.426976  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.427712  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.428377  744457 addons.go:238] Setting addon default-storageclass=true in "addons-395535"
	I1006 13:51:10.428423  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.428845  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.428869  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.430068  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.430106  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.430316  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.430514  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I1006 13:51:10.435193  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.435275  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.435384  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38339
	I1006 13:51:10.436189  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.436554  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.436654  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33717
	I1006 13:51:10.438838  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.439210  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
	I1006 13:51:10.439551  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.439567  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.439708  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I1006 13:51:10.440347  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.440409  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.440462  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.441036  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.441332  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.441350  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.441738  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.441818  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.441873  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.442247  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.442284  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.442501  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44443
	I1006 13:51:10.443064  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.443081  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.443522  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.444985  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45169
	I1006 13:51:10.445992  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38759
	I1006 13:51:10.446667  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.446709  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.448103  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34409
	I1006 13:51:10.448387  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.448403  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.449306  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.449413  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.449525  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.450162  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.450183  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.450402  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I1006 13:51:10.450533  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.450566  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.450879  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.450897  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.451093  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.451192  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.451402  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.451641  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.452386  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.452520  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.452955  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.452963  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.452974  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.453103  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.453171  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.453247  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.453205  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.453661  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.453912  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.453941  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.454283  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.454345  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.454660  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.454675  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.455059  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.455088  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.455391  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.455632  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.456067  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.456490  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.456776  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.457389  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.458322  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I1006 13:51:10.459057  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.460478  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.460861  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.460749  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.461097  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.460784  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.461473  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:10.461497  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:10.463706  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42257
	I1006 13:51:10.463871  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:10.463905  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:10.463912  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:10.463921  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:10.463928  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:10.464019  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.464168  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I1006 13:51:10.464747  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.466410  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.466431  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.466887  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.467309  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.468671  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:10.468710  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:10.468717  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:10.469330  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.469420  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	W1006 13:51:10.469442  744457 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1006 13:51:10.469504  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.470808  744457 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-395535"
	I1006 13:51:10.470858  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:10.471309  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.471347  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.473401  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.473825  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.473914  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45919
	I1006 13:51:10.474480  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.474515  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.475109  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.475762  744457 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 13:51:10.475959  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.476012  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.476202  744457 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1006 13:51:10.477541  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.477660  744457 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 13:51:10.477675  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 13:51:10.477695  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
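Note: "scp memory --> <path> (N bytes)" lines mean the runner streams a manifest held in memory straight to the guest over SSH — there is no local temp file. A minimal sketch of the same idea with golang.org/x/crypto/ssh, writing through sudo tee so root-owned directories like /etc/kubernetes work (assumptions: the tee transport and the shortened key path are illustrative, not ssh_runner's exact mechanism):

	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// copyMemory streams data to remotePath on the machine behind
	// client, via `sudo tee` so the write succeeds under /etc.
	func copyMemory(client *ssh.Client, data []byte, remotePath string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
	}

	func main() {
		key, err := os.ReadFile("id_rsa") // per-machine key, as in the sshutil lines
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		client, err := ssh.Dial("tcp", "192.168.39.36:22", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
		})
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		manifest := []byte("# rendered storage-provisioner.yaml would go here\n")
		if err := copyMemory(client, manifest,
			"/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
			log.Fatal(err)
		}
	}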
	I1006 13:51:10.478427  744457 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1006 13:51:10.478442  744457 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1006 13:51:10.478463  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.478463  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.478477  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.479108  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.480913  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35099
	I1006 13:51:10.482031  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.483117  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.483156  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.484312  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.484328  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.485080  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.485794  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.485955  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.486135  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40083
	I1006 13:51:10.486953  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.487446  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.487464  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.487938  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.488392  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.488693  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.490023  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.490122  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.490138  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.490256  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.490766  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.491169  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.492844  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I1006 13:51:10.493818  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.494354  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.494370  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.496135  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.496298  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.496768  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.497299  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.497524  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.497794  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.497880  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.498089  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.498277  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.498456  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.501687  744457 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1006 13:51:10.502403  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I1006 13:51:10.502702  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.502880  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35125
	I1006 13:51:10.503135  744457 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 13:51:10.503153  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1006 13:51:10.503178  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.503616  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.504145  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I1006 13:51:10.504637  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.504665  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.505035  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.505289  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38231
	I1006 13:51:10.505517  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41313
	I1006 13:51:10.505652  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.505759  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.505830  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.506238  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.506344  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.506538  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.507167  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.507335  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.507347  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.507913  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.508135  744457 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1006 13:51:10.508393  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.508472  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.508493  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.509165  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.509410  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:10.509446  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:10.509710  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.509841  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.510188  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.510865  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.510948  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.511396  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.511417  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.511812  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.511820  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.511850  744457 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1006 13:51:10.511992  744457 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1006 13:51:10.512014  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.512236  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.512429  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.512978  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.513017  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34349
	I1006 13:51:10.513533  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.513880  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.513949  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.514517  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.514538  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.514757  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.515666  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.515728  744457 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1006 13:51:10.516109  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.516750  744457 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1006 13:51:10.517195  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.517606  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.517795  744457 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 13:51:10.517815  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1006 13:51:10.517845  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.518942  744457 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1006 13:51:10.518965  744457 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1006 13:51:10.518985  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.519580  744457 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1006 13:51:10.520017  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.520043  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.520129  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.520339  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.521698  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.522090  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.522610  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.522690  744457 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1006 13:51:10.522920  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I1006 13:51:10.523954  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.524481  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43825
	I1006 13:51:10.524710  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I1006 13:51:10.524782  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.524831  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.525015  744457 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1006 13:51:10.525168  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.525287  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.525295  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.526957  744457 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1006 13:51:10.527030  744457 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1006 13:51:10.527041  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.527117  744457 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1006 13:51:10.527376  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.527401  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.527400  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.527478  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I1006 13:51:10.527538  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.527621  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.527646  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.527705  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.527155  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.527068  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.527204  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.528067  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.528110  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.528190  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.529070  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.529438  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.529447  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.529439  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.529485  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.529496  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42759
	I1006 13:51:10.529507  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.529719  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.529868  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.529880  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.530424  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.530684  744457 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1006 13:51:10.530815  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.530817  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.531311  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.531522  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.532020  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.532132  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.532426  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.532554  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.532790  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.532965  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.533240  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.533819  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.533828  744457 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1006 13:51:10.534206  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.534543  744457 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 13:51:10.534560  744457 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 13:51:10.534579  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.535115  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.535809  744457 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1006 13:51:10.535913  744457 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1006 13:51:10.536901  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.537017  744457 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1006 13:51:10.537103  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.537127  744457 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1006 13:51:10.537139  744457 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1006 13:51:10.537318  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35103
	I1006 13:51:10.537139  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1006 13:51:10.537430  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.537835  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.538398  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.538434  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.538492  744457 out.go:179]   - Using image docker.io/registry:3.0.0
	I1006 13:51:10.538666  744457 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 13:51:10.538956  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1006 13:51:10.538973  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.538670  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.539093  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.538831  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.539265  744457 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1006 13:51:10.539489  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.539770  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.539800  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.540338  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.540384  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.540334  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.540524  744457 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1006 13:51:10.540578  744457 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 13:51:10.540613  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1006 13:51:10.540637  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.540804  744457 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1006 13:51:10.540820  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1006 13:51:10.540846  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.541245  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I1006 13:51:10.541924  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.542029  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:10.542501  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.542690  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:10.542716  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:10.543154  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:10.543247  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.543367  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:10.543460  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.543735  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.543853  744457 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1006 13:51:10.543885  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.544605  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.545273  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.545273  744457 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1006 13:51:10.545496  744457 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1006 13:51:10.545518  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.545863  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.545908  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.546375  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.546547  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.546772  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.546957  744457 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 13:51:10.547032  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.547291  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.547655  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.547687  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.548282  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.548507  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
	I1006 13:51:10.548793  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.549695  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.549931  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.549978  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.550685  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.550974  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.550998  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.551246  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.551346  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.551400  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.551439  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.551504  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.551657  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.551717  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.551833  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.551875  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.551896  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.551890  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.552044  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.552181  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.552189  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.552366  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.552796  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.552941  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.553447  744457 out.go:179]   - Using image docker.io/busybox:stable
	I1006 13:51:10.554332  744457 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 13:51:10.556075  744457 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1006 13:51:10.557343  744457 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1006 13:51:10.557384  744457 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 13:51:10.557396  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1006 13:51:10.557415  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.558764  744457 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 13:51:10.558790  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1006 13:51:10.558815  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:10.561055  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.561493  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.561539  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.561704  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.561954  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.562162  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.562315  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:10.562503  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.562930  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:10.562954  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:10.563161  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:10.563341  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:10.563508  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:10.563674  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	W1006 13:51:10.740478  744457 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46180->192.168.39.36:22: read: connection reset by peer
	I1006 13:51:10.740525  744457 retry.go:31] will retry after 301.595614ms: ssh: handshake failed: read tcp 192.168.39.1:46180->192.168.39.36:22: read: connection reset by peer
	W1006 13:51:10.762866  744457 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46200->192.168.39.36:22: read: connection reset by peer
	I1006 13:51:10.762908  744457 retry.go:31] will retry after 211.904733ms: ssh: handshake failed: read tcp 192.168.39.1:46200->192.168.39.36:22: read: connection reset by peer
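Note: the two handshake failures above are benign — the VM's sshd is still coming up, the dial is retried after a short randomized delay, and both transfers succeed on the next attempt. The retry-after pattern, sketched below (illustrative; minikube's retry.go may compute delays differently):

	package main

	import (
		"fmt"
		"log"
		"math/rand"
		"time"
	)

	// retryAfter runs fn up to attempts times, sleeping a jittered
	// delay between failures, like the "will retry after ..." lines.
	func retryAfter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base + time.Duration(rand.Int63n(int64(base)))
			log.Printf("will retry after %v: %v", d, err)
			time.Sleep(d)
		}
		return fmt.Errorf("after %d attempts: %w", attempts, err)
	}

	func main() {
		calls := 0
		err := retryAfter(3, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("ssh: handshake failed: connection reset by peer")
			}
			return nil
		})
		if err != nil {
			log.Fatal(err)
		}
	}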
	I1006 13:51:11.319523  744457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 13:51:11.319650  744457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
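Note: the one-liner above is how host.minikube.internal becomes resolvable inside the cluster: it pulls the coredns ConfigMap, uses sed to splice a hosts{} stanza (gateway IP 192.168.39.1 → host.minikube.internal) in front of the forward plugin plus a log directive after errors, then pipes the result back through kubectl replace. The same splice in Go, sketched with os/exec (simplified: the log-directive edit is omitted, and plain string matching stands in for the sed expressions):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// injectHostRecord inserts a hosts{} block immediately before the
	// Corefile's forward line, as the sed `i` command above does.
	func injectHostRecord(corefileYAML, hostIP string) string {
		block := fmt.Sprintf(
			"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
			hostIP)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefileYAML, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out.WriteString(block)
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		raw, err := exec.Command("kubectl", "-n", "kube-system",
			"get", "configmap", "coredns", "-o", "yaml").Output()
		if err != nil {
			log.Fatal(err)
		}
		replace := exec.Command("kubectl", "replace", "-f", "-")
		replace.Stdin = strings.NewReader(injectHostRecord(string(raw), "192.168.39.1"))
		if out, err := replace.CombinedOutput(); err != nil {
			log.Fatalf("kubectl replace: %v\n%s", err, out)
		}
	}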
	I1006 13:51:11.437749  744457 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1006 13:51:11.437788  744457 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1006 13:51:11.440895  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 13:51:11.445864  744457 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1006 13:51:11.445891  744457 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1006 13:51:11.446598  744457 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:11.446620  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1006 13:51:11.448539  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 13:51:11.532180  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1006 13:51:11.546796  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 13:51:11.556840  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 13:51:11.573414  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 13:51:11.578660  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 13:51:11.583584  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 13:51:11.600361  744457 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1006 13:51:11.600401  744457 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1006 13:51:11.604756  744457 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1006 13:51:11.604786  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1006 13:51:11.627075  744457 node_ready.go:35] waiting up to 6m0s for node "addons-395535" to be "Ready" ...
	I1006 13:51:11.634536  744457 node_ready.go:49] node "addons-395535" is "Ready"
	I1006 13:51:11.634573  744457 node_ready.go:38] duration metric: took 7.438344ms for node "addons-395535" to be "Ready" ...
	I1006 13:51:11.634599  744457 api_server.go:52] waiting for apiserver process to appear ...
	I1006 13:51:11.634658  744457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
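Note: node readiness was immediate here (7.4ms), after which the runner waits for the control plane by looking for the newest kube-apiserver process with pgrep (-x exact match, -n newest, -f match the full command line). A local polling sketch of the same check — the log runs it over SSH inside the VM:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	// waitForProcess polls pgrep until pattern matches or the timeout
	// passes; pgrep exits non-zero when nothing matches.
	func waitForProcess(pattern string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if out, err := exec.Command("pgrep", "-xnf", pattern).Output(); err == nil {
				return strings.TrimSpace(string(out)), nil // newest matching PID
			}
			time.Sleep(500 * time.Millisecond)
		}
		return "", fmt.Errorf("no process matching %q within %v", pattern, timeout)
	}

	func main() {
		pid, err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("apiserver pid:", pid)
	}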
	I1006 13:51:11.814560  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 13:51:11.928635  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:12.077112  744457 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1006 13:51:12.077143  744457 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1006 13:51:12.155495  744457 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1006 13:51:12.155526  744457 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1006 13:51:12.231333  744457 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1006 13:51:12.231373  744457 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1006 13:51:12.277539  744457 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1006 13:51:12.277565  744457 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1006 13:51:12.490060  744457 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1006 13:51:12.490090  744457 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1006 13:51:12.789240  744457 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1006 13:51:12.789331  744457 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1006 13:51:12.928666  744457 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1006 13:51:12.928703  744457 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1006 13:51:12.956612  744457 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1006 13:51:12.956649  744457 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1006 13:51:12.970000  744457 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 13:51:12.970031  744457 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1006 13:51:13.018118  744457 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1006 13:51:13.018153  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1006 13:51:13.040214  744457 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1006 13:51:13.040247  744457 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1006 13:51:13.211451  744457 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1006 13:51:13.211475  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1006 13:51:13.375362  744457 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1006 13:51:13.375394  744457 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1006 13:51:13.432668  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 13:51:13.724732  744457 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1006 13:51:13.724763  744457 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1006 13:51:13.733120  744457 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 13:51:13.733144  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1006 13:51:13.740964  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1006 13:51:13.844161  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1006 13:51:14.049930  744457 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1006 13:51:14.049956  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1006 13:51:14.182458  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 13:51:14.541526  744457 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1006 13:51:14.541561  744457 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1006 13:51:15.078174  744457 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1006 13:51:15.078197  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1006 13:51:15.316365  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.875420849s)
	I1006 13:51:15.316389  744457 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.996674706s)
	I1006 13:51:15.316426  744457 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1006 13:51:15.316444  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.867872808s)
	I1006 13:51:15.316451  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:15.316469  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:15.316478  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:15.316491  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:15.316838  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:15.316846  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:15.316860  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:15.316862  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:15.316865  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:15.316871  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:15.316872  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:15.316880  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:15.316885  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:15.317180  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:15.317214  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:15.317236  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:15.317298  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:15.317315  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:15.317324  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:15.455791  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.923564303s)
	I1006 13:51:15.455843  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:15.455854  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:15.456105  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:15.456151  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:15.456177  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:15.456191  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:15.456203  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:15.456447  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:15.456461  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:15.578249  744457 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1006 13:51:15.578286  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1006 13:51:15.856744  744457 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-395535" context rescaled to 1 replicas
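Note: kubeadm ships two CoreDNS replicas by default; one is enough on a single-node test cluster, so the deployment is rescaled to 1. The kapi.go source suggests minikube does this through the Kubernetes API directly; the equivalent one-shot kubectl call, sketched with os/exec:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// One CoreDNS replica is plenty for a single-node cluster.
		cmd := exec.Command("kubectl", "--context", "addons-395535",
			"-n", "kube-system", "scale", "deployment", "coredns", "--replicas=1")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("scale failed: %v\n%s", err, out)
		}
	}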
	I1006 13:51:15.883312  744457 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 13:51:15.883348  744457 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1006 13:51:16.399544  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
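Note: related manifests are applied in batched invocations — here all eleven CSI hostpath RBAC and deploy files in a single kubectl apply with repeated -f flags — which keeps one addon's files in one call and avoids a process spawn per file. Building such a call, sketched below (manifest list abbreviated from the log line above; --kubeconfig stands in for the sudo KUBECONFIG=... env form the runner uses):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// applyAll issues one `kubectl apply` with a -f flag per manifest,
	// mirroring the batched invocation in the log line above.
	func applyAll(kubeconfig string, manifests ...string) error {
		args := []string{"--kubeconfig", kubeconfig, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubectl apply: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		if err := applyAll("/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/rbac-external-attacher.yaml",
			"/etc/kubernetes/addons/csi-hostpath-plugin.yaml",
			"/etc/kubernetes/addons/csi-hostpath-storageclass.yaml",
		); err != nil {
			log.Fatal(err)
		}
	}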
	I1006 13:51:17.697999  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.15115231s)
	I1006 13:51:17.698073  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:17.698079  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.141204786s)
	I1006 13:51:17.698091  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:17.698107  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:17.698121  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:17.698144  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.12468383s)
	I1006 13:51:17.698197  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:17.698210  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:17.698405  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:17.698422  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:17.698432  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:17.698439  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:17.698607  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:17.698632  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:17.698640  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:17.698649  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:17.698656  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:17.698656  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:17.698664  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:17.698668  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:17.698672  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:17.698734  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:17.698748  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:17.698757  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:17.698786  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:17.699129  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:17.699161  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:17.699168  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:17.699213  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:17.699265  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:17.699279  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:17.958740  744457 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1006 13:51:17.958798  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:17.963133  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:17.963693  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:17.963726  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:17.963963  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:17.964300  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:17.964502  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:17.964701  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:18.225718  744457 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1006 13:51:18.507295  744457 addons.go:238] Setting addon gcp-auth=true in "addons-395535"
	I1006 13:51:18.507363  744457 host.go:66] Checking if "addons-395535" exists ...
	I1006 13:51:18.507766  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:18.507807  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:18.522326  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I1006 13:51:18.522852  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:18.523481  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:18.523512  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:18.523993  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:18.524518  744457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 13:51:18.524551  744457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 13:51:18.538774  744457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I1006 13:51:18.539429  744457 main.go:141] libmachine: () Calling .GetVersion
	I1006 13:51:18.540021  744457 main.go:141] libmachine: Using API Version  1
	I1006 13:51:18.540051  744457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 13:51:18.540445  744457 main.go:141] libmachine: () Calling .GetMachineName
	I1006 13:51:18.540705  744457 main.go:141] libmachine: (addons-395535) Calling .GetState
	I1006 13:51:18.542426  744457 main.go:141] libmachine: (addons-395535) Calling .DriverName
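
A note on the libmachine chatter that recurs throughout this log: each KVM machine driver runs as a separate plugin process exposing a local RPC server (here on 127.0.0.1:43015 and 127.0.0.1:38325), and every "Making call to close driver server" / "Closing plugin on server side" pair is one round-trip to such a server, repeated once per helper connection. While a start is in flight the plugin processes are visible from the host (a sketch; output depends on how many driver instances are live):

  $ pgrep -af docker-machine-driver-kvm2
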
	I1006 13:51:18.542695  744457 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1006 13:51:18.542721  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHHostname
	I1006 13:51:18.546083  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:18.546642  744457 main.go:141] libmachine: (addons-395535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:35:3c", ip: ""} in network mk-addons-395535: {Iface:virbr1 ExpiryTime:2025-10-06 14:50:41 +0000 UTC Type:0 Mac:52:54:00:55:35:3c Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-395535 Clientid:01:52:54:00:55:35:3c}
	I1006 13:51:18.546680  744457 main.go:141] libmachine: (addons-395535) DBG | domain addons-395535 has defined IP address 192.168.39.36 and MAC address 52:54:00:55:35:3c in network mk-addons-395535
	I1006 13:51:18.546973  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHPort
	I1006 13:51:18.547209  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHKeyPath
	I1006 13:51:18.547395  744457 main.go:141] libmachine: (addons-395535) Calling .GetSSHUsername
	I1006 13:51:18.547570  744457 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/addons-395535/id_rsa Username:docker}
	I1006 13:51:20.533324  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.954620913s)
	I1006 13:51:20.533395  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.533324  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.949684925s)
	I1006 13:51:20.533464  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.71887298s)
	I1006 13:51:20.533498  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.533462  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.533530  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.533565  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.604895888s)
	I1006 13:51:20.533408  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	W1006 13:51:20.533614  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:20.533674  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.100969112s)
	I1006 13:51:20.533684  744457 retry.go:31] will retry after 170.360871ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
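
The ig-crd.yaml failure above is client-side: the manifest shipped to the node lacks its apiVersion and kind fields, so kubectl's validation rejects it before the API server is ever consulted, and the retries below hit the same file and fail the same way. A minimal manual check, reusing the paths and binary names already shown in this log (the head inspection is an assumption about where the defect sits, not output captured from this run):

  $ out/minikube-linux-amd64 -p addons-395535 ssh "head -n 5 /etc/kubernetes/addons/ig-crd.yaml"
  # a well-formed CRD manifest must begin with both identifying fields:
  #   apiVersion: apiextensions.k8s.io/v1
  #   kind: CustomResourceDefinition
  # as the error text itself says, validation can be bypassed (a workaround, not a fix):
  $ out/minikube-linux-amd64 -p addons-395535 ssh "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml"
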
	I1006 13:51:20.533398  744457 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.898714076s)
	I1006 13:51:20.533707  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.533723  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.533727  744457 api_server.go:72] duration metric: took 10.155714541s to wait for apiserver process to appear ...
	I1006 13:51:20.533748  744457 api_server.go:88] waiting for apiserver healthz status ...
	I1006 13:51:20.533776  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.689587056s)
	I1006 13:51:20.533737  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.792709375s)
	I1006 13:51:20.533514  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.533795  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.533799  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.533806  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.533811  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.533876  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:20.533880  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:20.533890  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.533778  744457 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I1006 13:51:20.533901  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.533911  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.533920  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.533921  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.533933  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.533944  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.533951  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.533955  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.351455783s)
	W1006 13:51:20.533985  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1006 13:51:20.533987  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:20.533999  744457 retry.go:31] will retry after 362.703695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
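
The csi-hostpath-snapshotclass.yaml failure, by contrast, is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, and it cannot be mapped until those CRDs are established. minikube's retry loop absorbs this; done by hand, the dependency can be serialized by waiting on the CRD first (a sketch, assuming the manifest has been copied off the node or is applied via minikube ssh):

  $ kubectl --context addons-395535 wait --for condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
  $ kubectl --context addons-395535 apply -f csi-hostpath-snapshotclass.yaml
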
	I1006 13:51:20.534013  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.534023  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.534031  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.534038  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.534464  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:20.534494  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.534500  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.534507  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.534513  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.534560  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:20.534576  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:20.534578  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.534605  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:20.534609  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.534625  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.534631  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.534638  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.534644  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.534687  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:20.534705  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.534711  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.534718  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.534723  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.534824  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.534824  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.534833  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.534837  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.534845  744457 addons.go:479] Verifying addon metrics-server=true in "addons-395535"
	I1006 13:51:20.534847  744457 addons.go:479] Verifying addon ingress=true in "addons-395535"
	I1006 13:51:20.535632  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:20.535664  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.535672  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.535679  744457 addons.go:479] Verifying addon registry=true in "addons-395535"
	I1006 13:51:20.536325  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.536342  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.536721  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.536738  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.540203  744457 out.go:179] * Verifying registry addon...
	I1006 13:51:20.540205  744457 out.go:179] * Verifying ingress addon...
	I1006 13:51:20.540205  744457 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-395535 service yakd-dashboard -n yakd-dashboard
	
	I1006 13:51:20.542528  744457 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1006 13:51:20.542528  744457 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1006 13:51:20.587212  744457 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
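
The healthz probe that api_server.go runs is an ordinary HTTPS GET and can be reproduced from the host against the same endpoint (a sketch; -k skips verification of the cluster's self-signed serving certificate, and anonymous access to /healthz is assumed, which is the default RBAC binding):

  $ curl -k https://192.168.39.36:8443/healthz
  # expected body on a healthy apiserver: ok
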
	I1006 13:51:20.600330  744457 api_server.go:141] control plane version: v1.34.1
	I1006 13:51:20.600361  744457 api_server.go:131] duration metric: took 66.601239ms to wait for apiserver health ...
	I1006 13:51:20.600371  744457 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 13:51:20.618490  744457 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 13:51:20.618514  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:20.618643  744457 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1006 13:51:20.618669  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:20.661290  744457 system_pods.go:59] 16 kube-system pods found
	I1006 13:51:20.661350  744457 system_pods.go:61] "amd-gpu-device-plugin-c5865" [3e17cce6-7fa4-4192-a773-8370967be6ba] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1006 13:51:20.661362  744457 system_pods.go:61] "coredns-66bc5c9577-6fw22" [460916b0-5247-49dc-8a2b-987d81276af1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 13:51:20.661370  744457 system_pods.go:61] "coredns-66bc5c9577-x925l" [6d6bbf26-9348-460c-930c-a184820cf4fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 13:51:20.661380  744457 system_pods.go:61] "etcd-addons-395535" [3ae3dc45-113f-425a-bb9e-f61d9ab6838b] Running
	I1006 13:51:20.661388  744457 system_pods.go:61] "kube-apiserver-addons-395535" [9535914b-5c31-4a95-947f-234ecfc68b46] Running
	I1006 13:51:20.661393  744457 system_pods.go:61] "kube-controller-manager-addons-395535" [15bfddfe-83d6-4306-a929-766f614964e0] Running
	I1006 13:51:20.661402  744457 system_pods.go:61] "kube-ingress-dns-minikube" [a378a875-cf6b-48a9-81bb-81d31cb79673] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 13:51:20.661413  744457 system_pods.go:61] "kube-proxy-8xc6l" [39c56ecc-49e5-48fb-b175-8d023805f407] Running
	I1006 13:51:20.661419  744457 system_pods.go:61] "kube-scheduler-addons-395535" [ce2a9ec5-7d9f-4f53-b680-88fab95728bf] Running
	I1006 13:51:20.661431  744457 system_pods.go:61] "metrics-server-85b7d694d7-zdqg2" [2c5c0f60-39b7-49e4-9308-804e749198d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 13:51:20.661440  744457 system_pods.go:61] "nvidia-device-plugin-daemonset-grxdz" [64007d43-4ee6-4ad1-8000-d38b65a402e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 13:51:20.661446  744457 system_pods.go:61] "registry-66898fdd98-6wslm" [ac13d99a-af77-4a4d-ad44-a574c23cb352] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 13:51:20.661451  744457 system_pods.go:61] "registry-creds-764b6fb674-shdbg" [5ce3fa1c-bbd6-4b9f-a7af-eaf1d86b7206] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 13:51:20.661456  744457 system_pods.go:61] "registry-proxy-kh2xs" [2b9dfc63-a725-49d8-a06d-8607e45aacbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 13:51:20.661462  744457 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7t84n" [ec6d231d-9f9e-40de-8e49-c4ea70c07708] Pending
	I1006 13:51:20.661467  744457 system_pods.go:61] "storage-provisioner" [f0a66d8d-0033-489d-b301-bf2fcc689b91] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 13:51:20.661475  744457 system_pods.go:74] duration metric: took 61.097661ms to wait for pod list to return data ...
	I1006 13:51:20.661485  744457 default_sa.go:34] waiting for default service account to be created ...
	I1006 13:51:20.704998  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:20.726324  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.726350  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.726694  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.726715  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.726746  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	W1006 13:51:20.726832  744457 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
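
The storage-provisioner-rancher warning above is a plain optimistic-concurrency conflict: the attempt to mark local-path as the default StorageClass raced with another writer, so the update was rejected and has to be retried against the latest object version. Done manually it is a single patch using the standard default-class annotation (a sketch):

  $ kubectl --context addons-395535 patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
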
	I1006 13:51:20.860194  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:20.860218  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:20.860577  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:20.860619  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:20.885101  744457 default_sa.go:45] found service account: "default"
	I1006 13:51:20.885147  744457 default_sa.go:55] duration metric: took 223.652964ms for default service account to be created ...
	I1006 13:51:20.885160  744457 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 13:51:20.897316  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 13:51:20.995341  744457 system_pods.go:86] 17 kube-system pods found
	I1006 13:51:20.995387  744457 system_pods.go:89] "amd-gpu-device-plugin-c5865" [3e17cce6-7fa4-4192-a773-8370967be6ba] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1006 13:51:20.995398  744457 system_pods.go:89] "coredns-66bc5c9577-6fw22" [460916b0-5247-49dc-8a2b-987d81276af1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 13:51:20.995413  744457 system_pods.go:89] "coredns-66bc5c9577-x925l" [6d6bbf26-9348-460c-930c-a184820cf4fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 13:51:20.995420  744457 system_pods.go:89] "etcd-addons-395535" [3ae3dc45-113f-425a-bb9e-f61d9ab6838b] Running
	I1006 13:51:20.995427  744457 system_pods.go:89] "kube-apiserver-addons-395535" [9535914b-5c31-4a95-947f-234ecfc68b46] Running
	I1006 13:51:20.995433  744457 system_pods.go:89] "kube-controller-manager-addons-395535" [15bfddfe-83d6-4306-a929-766f614964e0] Running
	I1006 13:51:20.995445  744457 system_pods.go:89] "kube-ingress-dns-minikube" [a378a875-cf6b-48a9-81bb-81d31cb79673] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 13:51:20.995454  744457 system_pods.go:89] "kube-proxy-8xc6l" [39c56ecc-49e5-48fb-b175-8d023805f407] Running
	I1006 13:51:20.995458  744457 system_pods.go:89] "kube-scheduler-addons-395535" [ce2a9ec5-7d9f-4f53-b680-88fab95728bf] Running
	I1006 13:51:20.995462  744457 system_pods.go:89] "metrics-server-85b7d694d7-zdqg2" [2c5c0f60-39b7-49e4-9308-804e749198d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 13:51:20.995469  744457 system_pods.go:89] "nvidia-device-plugin-daemonset-grxdz" [64007d43-4ee6-4ad1-8000-d38b65a402e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 13:51:20.995474  744457 system_pods.go:89] "registry-66898fdd98-6wslm" [ac13d99a-af77-4a4d-ad44-a574c23cb352] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 13:51:20.995479  744457 system_pods.go:89] "registry-creds-764b6fb674-shdbg" [5ce3fa1c-bbd6-4b9f-a7af-eaf1d86b7206] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 13:51:20.995487  744457 system_pods.go:89] "registry-proxy-kh2xs" [2b9dfc63-a725-49d8-a06d-8607e45aacbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 13:51:20.995494  744457 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7t84n" [ec6d231d-9f9e-40de-8e49-c4ea70c07708] Pending
	I1006 13:51:20.995500  744457 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k45m4" [d6cca6d5-c5e7-492b-a0db-986e257b1ccc] Pending
	I1006 13:51:20.995507  744457 system_pods.go:89] "storage-provisioner" [f0a66d8d-0033-489d-b301-bf2fcc689b91] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 13:51:20.995519  744457 system_pods.go:126] duration metric: took 110.350274ms to wait for k8s-apps to be running ...
	I1006 13:51:20.995535  744457 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 13:51:20.995605  744457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 13:51:21.082387  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:21.083303  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:21.572582  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:21.578314  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:22.028083  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.628480959s)
	I1006 13:51:22.028122  744457 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.485402732s)
	I1006 13:51:22.028156  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:22.028175  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:22.028580  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:22.028651  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:22.028664  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:22.028677  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:22.028694  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:22.028972  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:22.028994  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:22.029008  744457 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-395535"
	I1006 13:51:22.031008  744457 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1006 13:51:22.031824  744457 out.go:179] * Verifying csi-hostpath-driver addon...
	I1006 13:51:22.033410  744457 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 13:51:22.034414  744457 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1006 13:51:22.034806  744457 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1006 13:51:22.034827  744457 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1006 13:51:22.073975  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:22.077437  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:22.077474  744457 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 13:51:22.077494  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:22.182653  744457 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1006 13:51:22.182681  744457 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1006 13:51:22.328496  744457 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 13:51:22.328519  744457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1006 13:51:22.465717  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 13:51:22.542307  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:22.547209  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:22.551769  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:23.042279  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:23.049629  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:23.055099  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:23.540876  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:23.548013  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:23.549995  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:24.050159  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:24.050312  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:24.055839  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:24.267629  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.562585836s)
	I1006 13:51:24.267661  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.370277424s)
	W1006 13:51:24.267685  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:24.267706  744457 retry.go:31] will retry after 289.517043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:24.267727  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:24.267761  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:24.267743  744457 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.272122716s)
	I1006 13:51:24.267840  744457 system_svc.go:56] duration metric: took 3.272296033s WaitForService to wait for kubelet
	I1006 13:51:24.267864  744457 kubeadm.go:586] duration metric: took 13.889852506s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 13:51:24.267896  744457 node_conditions.go:102] verifying NodePressure condition ...
	I1006 13:51:24.268235  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:24.268258  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:24.268276  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:24.268285  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:24.268314  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:24.268654  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:24.268671  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:24.284138  744457 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1006 13:51:24.284179  744457 node_conditions.go:123] node cpu capacity is 2
	I1006 13:51:24.284194  744457 node_conditions.go:105] duration metric: took 16.29018ms to run NodePressure ...
	I1006 13:51:24.284208  744457 start.go:241] waiting for startup goroutines ...
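
The NodePressure figures above (17734596Ki ephemeral storage, 2 CPUs) are read straight from the node object's capacity; the same data can be pulled with a one-line query (a sketch; the single node shares the profile's name):

  $ kubectl --context addons-395535 get node addons-395535 -o jsonpath='{.status.capacity}'
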
	I1006 13:51:24.411261  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.945496131s)
	I1006 13:51:24.411336  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:24.411347  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:24.411698  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:24.411723  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:24.411732  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:51:24.411744  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:51:24.411750  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:51:24.412075  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:51:24.412089  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:51:24.413476  744457 addons.go:479] Verifying addon gcp-auth=true in "addons-395535"
	I1006 13:51:24.415248  744457 out.go:179] * Verifying gcp-auth addon...
	I1006 13:51:24.417325  744457 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1006 13:51:24.446747  744457 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1006 13:51:24.446769  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
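
The kapi.go polling that dominates the rest of this section is equivalent to a label-selector readiness wait; the gcp-auth case, for example, maps to the following (a sketch using the same selector and namespace as the log line above):

  $ kubectl --context addons-395535 -n gcp-auth wait pod \
      -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=120s
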
	I1006 13:51:24.542143  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:24.546978  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:24.549383  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:24.558332  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:24.922090  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:25.040026  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:25.048933  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:25.051826  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:25.424086  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:25.544607  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:25.554037  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:25.561233  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:25.931501  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:26.015465  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.457092698s)
	W1006 13:51:26.015512  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:26.015532  744457 retry.go:31] will retry after 290.894251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:26.053683  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:26.053703  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:26.053944  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:26.307317  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:26.426314  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:26.539899  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:26.546700  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:26.546751  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:26.925630  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:27.039533  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:27.050504  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:27.053162  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:27.424192  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:27.457565  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.150193718s)
	W1006 13:51:27.457654  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:27.457689  744457 retry.go:31] will retry after 738.668395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:27.537944  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:27.551103  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:27.552267  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:27.922131  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:28.041334  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:28.048422  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:28.051113  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:28.197332  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:28.421206  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:28.540070  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:28.548498  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:28.549435  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:28.925095  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:29.038533  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:29.046786  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:29.047152  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:29.256813  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.059430994s)
	W1006 13:51:29.256868  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:29.256902  744457 retry.go:31] will retry after 1.111251077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:29.424196  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:29.538771  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:29.550896  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:29.551439  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:29.923298  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:30.038819  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:30.045872  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:30.046120  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:30.368743  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:30.425280  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:30.540742  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:30.548570  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:30.549278  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:30.921146  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:31.039997  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:31.048808  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:31.050710  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:31.408812  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.040021792s)
	W1006 13:51:31.408865  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:31.408887  744457 retry.go:31] will retry after 1.723224637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
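Each ssh_runner.go pair above ("Run: ..." followed by "Completed: ... (1.04s)") is minikube executing the kubectl command inside the guest and logging its wall-clock duration. A minimal sketch of that run-and-time pattern (a local os/exec stand-in for the real SSH session; helper names are hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run executes a command line, timing it the way the Run/Completed log pairs do.
func run(cmdline string) error {
	fmt.Printf("Run: %s\n", cmdline)
	start := time.Now()
	out, err := exec.Command("sh", "-c", cmdline).CombinedOutput()
	if elapsed := time.Since(start); elapsed > time.Second {
		// Assumption: the duration is only worth logging when the command is slow.
		fmt.Printf("Completed: %s: (%s)\n", cmdline, elapsed)
	}
	if err != nil {
		return fmt.Errorf("%w\noutput:\n%s", err, out)
	}
	return nil
}

func main() {
	if err := run("kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		fmt.Println("apply failed, will retry:", err)
	}
}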
	I1006 13:51:31.423355  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:31.540332  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:31.550612  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:31.551417  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:31.924635  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:32.039521  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:32.051196  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:32.052702  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:32.423083  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:32.540950  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:32.547194  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:32.548649  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:32.922821  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:33.040970  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:33.045698  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:33.045757  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:33.132868  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:33.425417  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:33.543999  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:33.547384  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:33.552802  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:33.948335  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:34.039106  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:34.046925  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:34.049159  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:34.431899  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:34.509261  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.37633636s)
	W1006 13:51:34.509335  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:34.509428  744457 retry.go:31] will retry after 2.934918794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 13:51:34.655793  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:34.660134  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:34.661017  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:34.923288  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:35.040217  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:35.047725  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:35.050069  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:35.425304  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:35.539357  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:35.549930  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:35.551752  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:36.220789  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:36.432898  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:36.433082  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:36.433211  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:36.433833  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:36.539197  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:36.549702  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:36.551720  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:36.922740  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:37.038498  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:37.046157  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:37.046382  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:37.423599  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:37.444797  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:37.540760  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:37.548382  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:37.548915  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:37.920944  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:38.039258  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:38.053601  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:38.057261  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:38.424010  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:38.448363  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.00351875s)
	W1006 13:51:38.448411  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:38.448432  744457 retry.go:31] will retry after 3.91347115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 13:51:38.669179  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:38.670144  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:38.670845  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:38.922814  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:39.041097  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:39.046189  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:39.048139  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:39.423602  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:39.538177  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:39.548234  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:39.552558  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:39.925516  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:40.038219  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:40.046526  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:40.047725  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:40.464261  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:40.542513  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:40.546536  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:40.548029  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:40.924113  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:41.041141  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:41.045708  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:41.046038  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:41.422468  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:41.540150  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:41.546639  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:41.546845  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:41.921469  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:42.039083  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:42.046266  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:42.047637  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:42.362941  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:42.422681  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:42.538620  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:42.548236  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:42.549280  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:42.925101  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:43.038777  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:43.047313  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:43.049918  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1006 13:51:43.108774  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:43.108820  744457 retry.go:31] will retry after 4.449667332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 13:51:43.422929  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:43.542233  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:43.547044  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:43.548444  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:43.921219  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:44.045195  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:44.050197  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:44.050462  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:44.422663  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:44.541324  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:44.546602  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:44.548866  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:44.924228  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:45.039567  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:45.048691  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:45.049131  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:45.424266  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:45.542213  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:45.548858  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:45.549416  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:45.922102  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:46.040222  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:46.051118  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:46.051165  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:46.421010  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:46.552459  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:46.552726  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:46.552933  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:46.927452  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:47.038430  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:47.049819  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:47.052178  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:47.424516  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:47.539286  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:47.546141  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:47.547736  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:47.558837  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:47.921230  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:48.042866  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:48.047743  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:48.049242  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:48.440398  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:48.540925  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:48.547285  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:48.547533  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:48.733996  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.175108689s)
	W1006 13:51:48.734056  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:51:48.734080  744457 retry.go:31] will retry after 10.659659559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
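The retry.go:31 lines record a growing delay between attempts: 1.11s, 1.72s, 2.93s, 3.91s, 4.45s, now 10.66s, and 16.18s further down, i.e. roughly exponential backoff with jitter. A self-contained sketch of that pattern (an assumption about the shape of the loop, not minikube's actual retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, doubling the base delay each round
// and adding random jitter, like the delays logged above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<uint(i)) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retry(5, time.Second, func() error {
		// Stand-in for the kubectl apply that keeps failing validation.
		return errors.New("Process exited with status 1")
	})
	fmt.Println("giving up:", err)
}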
	I1006 13:51:48.922131  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:49.043032  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:49.047315  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:49.048290  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:49.430980  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:49.540286  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:49.550225  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:49.550223  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:49.931949  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:50.048583  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:50.057389  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:50.058083  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:50.426985  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:50.550040  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:50.550191  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:50.551883  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:51.018070  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:51.047095  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:51.061280  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:51.073285  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:51.427317  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:51.542917  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:51.550196  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:51.553229  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:51.926447  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:52.041937  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:52.054494  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:52.060767  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:52.423095  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:52.543207  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:52.553131  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:52.554323  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:52.924784  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:53.052378  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:53.058843  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:53.063051  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:53.424145  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:53.540784  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:53.552434  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:53.555499  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:53.924198  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:54.040802  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:54.048002  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:54.048167  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:54.423561  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:54.543047  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:54.548789  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:54.549576  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:54.923540  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:55.040725  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:55.050431  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:55.050696  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:55.423329  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:55.538862  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:55.549141  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:55.550199  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:55.923276  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:56.040431  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:56.050914  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:56.053509  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:56.586225  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:56.587912  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:56.588139  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:56.588834  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:56.923293  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:57.041151  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:57.050835  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:57.050977  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:57.423968  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:57.541157  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:57.550769  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:57.552392  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:57.921176  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:58.041569  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:58.047768  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:58.048470  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:58.423528  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:58.539072  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:58.549960  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:58.553794  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:58.921862  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:59.038776  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:59.045430  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:59.045716  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:59.394131  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:51:59.425252  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:51:59.540017  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:51:59.546330  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:51:59.548351  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:51:59.923009  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:00.047893  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:00.051946  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:00.055974  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:00.424129  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:00.540875  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:00.547165  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:00.552458  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:00.926743  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:00.968325  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.574146125s)
	W1006 13:52:00.968381  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:52:00.968407  744457 retry.go:31] will retry after 16.175465339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
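Interleaved with the retries, the kapi.go:96 lines show minikube polling four label selectors (gcp-auth, csi-hostpath-driver, registry, ingress-nginx) roughly every half second, all still Pending. A sketch of that kind of wait loop with client-go (hypothetical function and file names; the real kapi.go differs):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls pods matching selector until the first one is Running.
// Simplification: real code would check every matching pod, not just the first.
func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	ticker := time.NewTicker(500 * time.Millisecond) // roughly the cadence in the log
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		state := "Pending" // what the log reports throughout this window
		if err == nil && len(pods.Items) > 0 {
			if pods.Items[0].Status.Phase == corev1.PodRunning {
				return nil
			}
			state = string(pods.Items[0].Status.Phase)
		}
		fmt.Printf("waiting for pod %q, current state: %s\n", selector, state)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
}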
	I1006 13:52:01.041150  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:01.047264  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:01.048286  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:01.425829  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:01.559072  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:01.559386  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:01.561090  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:01.921257  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:02.042558  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:02.050778  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:02.051972  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:02.421473  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:02.538958  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:02.547507  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:02.549265  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:02.922138  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:03.039241  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:03.046476  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:03.047035  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:03.424124  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:03.541669  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:03.548409  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:03.552382  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:03.922288  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:04.039263  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:04.047044  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:04.050147  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:04.424305  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:04.543680  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:04.560734  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:04.565281  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:04.925732  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:05.039115  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:05.047890  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:05.047895  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:05.427175  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:05.540003  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:05.547509  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:05.549536  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:05.921966  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:06.041431  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:06.048721  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:06.054784  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:06.421410  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:06.948907  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:06.949035  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:06.949311  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:06.949470  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:07.040061  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:07.046868  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:07.046929  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:07.422264  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:07.539261  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:07.546857  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:07.547550  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:07.922308  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:08.040761  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:08.048438  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:08.051006  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:08.425743  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:08.539039  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:08.547437  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:08.547520  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:08.924370  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:09.041116  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:09.046684  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:09.049680  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:09.425360  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:09.541673  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:09.548714  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:09.551932  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:09.927919  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:10.042142  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:10.046420  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:10.051420  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:10.430477  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:10.540216  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:10.550245  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:10.550847  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:10.923381  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:11.039940  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:11.046124  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:11.046553  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:11.421727  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:11.542129  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:11.548260  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:11.551595  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:11.922175  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:12.039270  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:12.047189  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:12.048018  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:12.421509  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:12.538802  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:12.546782  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:12.546988  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:12.921514  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:13.038617  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:13.046511  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:13.048268  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:13.423415  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:13.540474  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:13.556031  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:13.556292  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:13.924536  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:14.040085  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:14.048373  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:14.048603  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:14.421224  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:14.539967  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:14.545420  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:14.550766  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:14.924607  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:15.182210  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:15.189642  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:15.196534  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:15.423439  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:15.545346  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:15.549861  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:15.555099  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 13:52:15.922767  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:16.040741  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:16.045709  744457 kapi.go:107] duration metric: took 55.503174945s to wait for kubernetes.io/minikube-addons=registry ...
	I1006 13:52:16.048804  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:16.423027  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:16.542219  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:16.550166  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:16.921956  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:17.039533  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:17.050496  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:17.144717  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:52:17.423779  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:17.540424  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:17.549179  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:17.923316  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:18.041671  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:18.048999  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:18.233105  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.088328126s)
	W1006 13:52:18.233161  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:52:18.233188  744457 retry.go:31] will retry after 15.996483748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
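The failure above is kubectl's manifest validation: every document in an applied file must set apiVersion and kind, and ig-crd.yaml evidently contains a document (for example, an empty one left behind a '---' separator) that sets neither. A minimal sketch that reproduces the same class of error, using a hypothetical /tmp/bad.yaml rather than the actual addon manifest; the exact error wording varies by kubectl version:

	cat <<'EOF' > /tmp/bad.yaml
	# No apiVersion or kind: validation rejects this document.
	metadata:
	  name: missing-type-info
	EOF
	kubectl apply --dry-run=client -f /tmp/bad.yaml
	# expect an error complaining that apiVersion and kind are not set

As the stderr suggests, --validate=false would skip the check, but the real fix is to give the offending document its apiVersion and kind.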
	I1006 13:52:18.424242  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:18.543690  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:18.548697  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:18.922826  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:19.039997  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:19.048397  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:19.423759  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:19.540802  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:19.740760  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:19.924475  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:20.045321  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:20.051490  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:20.420755  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:20.543685  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:20.549520  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:20.925601  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:21.039365  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:21.048806  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:21.426575  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:21.555896  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:21.656618  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:21.925069  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:22.039389  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:22.048415  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:22.423485  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:22.540682  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:22.548062  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:22.924811  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:23.039881  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:23.047617  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:23.421483  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:23.541083  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:23.548221  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:23.932889  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:24.041529  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:24.046066  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:24.425346  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:24.541877  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:24.549044  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:24.922924  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:25.041369  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:25.050072  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:25.426939  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:25.543878  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:25.554723  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:25.925496  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:26.043442  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:26.055890  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:26.422394  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:26.542220  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:26.642339  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:26.921296  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:27.047838  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:27.049973  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:27.421016  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:27.541792  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:27.551196  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:27.925754  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:28.042655  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:28.048324  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:28.422704  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:28.540168  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:28.546945  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:28.922247  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:29.039029  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:29.046699  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:29.420885  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:29.543053  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:29.554491  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:29.924416  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:30.049944  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:30.051574  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:30.422511  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:30.540935  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:30.547644  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:30.922153  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:31.041744  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:31.051452  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:31.429222  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:31.539724  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:31.545338  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:32.070120  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:32.072887  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:32.073780  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:32.422897  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:32.542337  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:32.548229  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:32.923418  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:33.041812  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:33.047987  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:33.423538  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:33.538328  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:33.546830  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:33.922015  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:34.040264  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:34.048423  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:34.230561  744457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 13:52:34.423749  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:34.541817  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:34.550065  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:34.929324  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:35.054226  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:35.054237  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:35.429739  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:35.546114  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:35.551157  744457 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 13:52:35.579426  744457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.348787988s)
	W1006 13:52:35.579472  744457 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 13:52:35.579544  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:52:35.579563  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:52:35.579915  744457 main.go:141] libmachine: (addons-395535) DBG | Closing plugin on server side
	I1006 13:52:35.579973  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:52:35.579992  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 13:52:35.580011  744457 main.go:141] libmachine: Making call to close driver server
	I1006 13:52:35.580023  744457 main.go:141] libmachine: (addons-395535) Calling .Close
	I1006 13:52:35.580346  744457 main.go:141] libmachine: Successfully made call to close driver server
	I1006 13:52:35.580367  744457 main.go:141] libmachine: Making call to close connection to plugin binary
	W1006 13:52:35.580504  744457 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
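The enabler does not fail hard on the first apply error: retry.go scheduled a second attempt roughly 16s after the 13:52:17 failure, and only when the 13:52:34 attempt failed the same way was the warning above surfaced. A rough shell equivalent of that retry loop, with illustrative delays rather than minikube's actual backoff (the kubectl invocation is copied verbatim from the log):

	for delay in 16 32; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml \
	    -f /etc/kubernetes/addons/ig-deployment.yaml && break
	  sleep "$delay"
	done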
	I1006 13:52:35.921762  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:36.041656  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:36.048236  744457 kapi.go:107] duration metric: took 1m15.505704211s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1006 13:52:36.422175  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:36.541165  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:36.923138  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:37.038467  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:37.422260  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:37.543552  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:37.924572  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:38.042050  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:38.423748  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:38.538810  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:38.925205  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:39.042669  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:39.421084  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:39.538452  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:39.931312  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 13:52:40.044684  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:40.475127  744457 kapi.go:107] duration metric: took 1m16.05780138s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1006 13:52:40.476727  744457 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-395535 cluster.
	I1006 13:52:40.478268  744457 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1006 13:52:40.479796  744457 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
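Per the gcp-auth hint above, opting a pod out of credential mounting is just a label on the pod itself. A minimal sketch; the pod name and the label value "true" are illustrative, since the log only specifies the `gcp-auth-skip-secret` key:

	cat <<'EOF' | kubectl --context addons-395535 apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds             # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true" # key from the log; value assumed
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF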
	I1006 13:52:40.538886  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:41.039754  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:41.542700  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:42.039202  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:42.544009  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:43.039240  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:43.540815  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:44.041225  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:44.545257  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:45.039282  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:45.539123  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:46.039792  744457 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 13:52:46.539688  744457 kapi.go:107] duration metric: took 1m24.505269111s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1006 13:52:46.541607  744457 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1006 13:52:46.543089  744457 addons.go:514] duration metric: took 1m36.166352361s for enable addons: enabled=[registry-creds amd-gpu-device-plugin cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1006 13:52:46.543137  744457 start.go:246] waiting for cluster config update ...
	I1006 13:52:46.543168  744457 start.go:255] writing updated cluster config ...
	I1006 13:52:46.543457  744457 ssh_runner.go:195] Run: rm -f paused
	I1006 13:52:46.550542  744457 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 13:52:46.554242  744457 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6fw22" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:46.559813  744457 pod_ready.go:94] pod "coredns-66bc5c9577-6fw22" is "Ready"
	I1006 13:52:46.559838  744457 pod_ready.go:86] duration metric: took 5.570762ms for pod "coredns-66bc5c9577-6fw22" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:46.562104  744457 pod_ready.go:83] waiting for pod "etcd-addons-395535" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:46.568075  744457 pod_ready.go:94] pod "etcd-addons-395535" is "Ready"
	I1006 13:52:46.568097  744457 pod_ready.go:86] duration metric: took 5.973017ms for pod "etcd-addons-395535" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:46.570821  744457 pod_ready.go:83] waiting for pod "kube-apiserver-addons-395535" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:46.577222  744457 pod_ready.go:94] pod "kube-apiserver-addons-395535" is "Ready"
	I1006 13:52:46.577245  744457 pod_ready.go:86] duration metric: took 6.399773ms for pod "kube-apiserver-addons-395535" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:46.579452  744457 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-395535" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:46.955180  744457 pod_ready.go:94] pod "kube-controller-manager-addons-395535" is "Ready"
	I1006 13:52:46.955221  744457 pod_ready.go:86] duration metric: took 375.742254ms for pod "kube-controller-manager-addons-395535" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:47.156890  744457 pod_ready.go:83] waiting for pod "kube-proxy-8xc6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:47.554975  744457 pod_ready.go:94] pod "kube-proxy-8xc6l" is "Ready"
	I1006 13:52:47.555019  744457 pod_ready.go:86] duration metric: took 398.093114ms for pod "kube-proxy-8xc6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:47.754771  744457 pod_ready.go:83] waiting for pod "kube-scheduler-addons-395535" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:48.155669  744457 pod_ready.go:94] pod "kube-scheduler-addons-395535" is "Ready"
	I1006 13:52:48.155703  744457 pod_ready.go:86] duration metric: took 400.900912ms for pod "kube-scheduler-addons-395535" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 13:52:48.155715  744457 pod_ready.go:40] duration metric: took 1.605143167s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
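These pod_ready checks poll each control-plane label selector until the matching pods report Ready. A roughly equivalent manual check uses kubectl wait against the same selectors (k8s-app=kube-dns shown; the other component labels from the log work the same way):

	kubectl --context addons-395535 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m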
	I1006 13:52:48.202070  744457 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1006 13:52:48.204365  744457 out.go:179] * Done! kubectl is now configured to use "addons-395535" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.592344354Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759758951592314876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606636,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4dd67a5b-3c4c-45a1-907b-5f5572ddfb27 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.593272414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67751a4b-192e-4fa1-984e-5fb260bbd823 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.593351617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67751a4b-192e-4fa1-984e-5fb260bbd823 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.593802640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75fbd7f36e623db806bd153d472c81d05bd70e575dfa0c47dc24b54d213b1b0f,PodSandboxId:6262dac8c8927ee67f78cb8e9a9d305a43cbd9659284548b065d83ba90a952ae,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1759758951455335760,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-smmtw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 151f63c2-174d-4b74-8d7e-38e9f9e73550,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f83887e92b9323041051253eed7a66ee99ca8e107d65acc9747d8242c0d4866,PodSandboxId:7a69cb8e34f3e0d1fec0a1ecf8d82a8f0345735948a9a89b6c13bad901cc3019,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759758811122649576,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 795dc70e-a62b-4ffc-a2c2-c63baf69c4c2,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb21929b640e5b660fbcd08b1a6361d37cbaa67511d682e7af781b6fd9c4075,PodSandboxId:c84e3742d269f8df97d60592e68c836c39b56ead18cb753af3f02f953532d1c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759758771713320309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5379af33-1084-493d-a8
bf-f3ad31a70aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336ad52e3baf2cf7da7ac32e9063d4f20befe4d88363ed08fca964f43cb2f0f,PodSandboxId:99084f99d88f51d84f94927817342fbd507727bf3f699c919ef7980a5a15c307,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1759758754830290695,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-427tz,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 95659ace-bf51-4f25-80d0-f300d6eb71d8,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2c70593b46f9875b217de54ccea79b3f9331a23bd0867e95d3e44c6a0f0d3f6,PodSandboxId:23941b6614ea053ca4b7ee27eda4bf1d83f8fafb480cbc7c6d463c1bc52aff42,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de
79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759758742743260151,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-pzd59,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d1d5d38a-f3a1-42d7-97bc-c5e70daae68d,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81366001db9a8f9222d724c68dd28db7cf872a5daac7ead8f0f5a2dc1c2eec0a,PodSandboxId:5b86e83669802fe9af51cb73d015f243350fbec833f853b23dbc3fe6adf42ef7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webh
ook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1759758741666329478,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-496k4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b4390d1-a90e-4165-bc7e-6d94ffd07ee1,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff9148ff0e4d8b0a340c31829afbb84d6d2528be2de0807809883f4a216a7b2,PodSandboxId:3ba067222b0ea7b2884dc3dcfb13c9e4192801cf32b6036d35a589986c79c6f1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:reg
istry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1759758737889989837,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9zfgx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da7aeb47-5df3-40cc-b96e-6b6b9e6070be,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7019b1bb436d91fc77d57e71443868e55521e25733f746111362baf21e3601ec,PodSandboxId:2b03e5ab6df3e6a32d992a127ff30f874490aab7271b0a69ddb800ef8bf59730,Metadata:&ContainerMetadata{Name:gadget,
Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759758731803604665,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-rmw8b,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: eaee675f-3751-4815-9217-b41c12d637a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0feb4ec1b41abd52a40daa8d725bfdb63bfc157d3c5578ad11ce43d700ad3fc9,PodSandboxId:ad914aaaf
163862624cfb1071af59ad7968572d1df68600ad5bdd52f8bf79e83,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759758719351907382,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a378a875-cf6b-48a9-81bb-81d31cb79673,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:b10ff075ce38f8d334a4991bb793c68ec845668cd78487e8fbd2c13dbbe4e370,PodSandboxId:9a78d7ae893ca8c97409fd2e16f9a9476fcf0e595e173011e8045010eab2776d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759758702141002309,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e17cce6-7fa4-4192-a773-8370967be6ba,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bcdfcf86adf2668df683a1f953c3328a839aa00b56b8d07c5df6c3a00b1599,PodSandboxId:d57bf9128538b9fc8d8c81a1d9cb141b1bda83cdaa1f11447edbca630d62c253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759758678672896862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a66d8d-0033-489d-b301-bf2fcc689b91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e05184ccc4103c0548cd684553afbc084731f98f6f545a716784b4f73ef630,PodSandboxId:97382fd1699a463a268e084d173c530a122262b4382f10d2cd33b8bfb332a073,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759758672546563221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6fw22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 460916b0-5247-49dc-8a2b-987d81276af1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"c
ontainerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8838fff3bd090727ba4caee25649bf5953f07bdd4a1ca05dffb740ead5f92850,PodSandboxId:aa2a6de36c49e3815618f7e4039f2511095edfe88c6f0fd0352946ad7600f276,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759758671725751092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xc6l,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c56ecc-49e5-48fb-b175-8d023805f407,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c564d47e8e35788e497750f1bab5ab985606b38d72e709b71d9f8936c32b95,PodSandboxId:cba1b5ea7c1e616de650ca9fac06ad15499426e7a1c92dd42d17a00b6a3d4039,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759758659639048107,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-395535,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 505f3a4a41f6ad8edc79e323bfa8abe8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e19727d18d5ac31848fe288c77447cea4e3a152d0499200893939254f0dccf,PodSandboxId:901554f9da4eb4c8e9a1da2e032d2aa1d755d5017a0fbfc9a1caf0e1f069ab8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759758659633610534,Labels:map[string]string{io.kubernetes.container.name
: etcd,io.kubernetes.pod.name: etcd-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18f6a8b66e0dec09ba1a16d33efd4e6,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd955d28da969ef86ad05fc0892940dbf9dd4ac25ae9329d4b6f407cd3b357fb,PodSandboxId:9360798f7cc4b195acc485d14ed8b5db817d709db8059e4c4422a9d853c027fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING
,CreatedAt:1759758659616263216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8af611c3e8a4bc03d10aec072bc784f6,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6fd596d90eedbf064739cce8c7b46c3d9eaef3d637aa98d45f4e436d8611c4f,PodSandboxId:1cae5c37c465456c249d67b99b2126de717702e674180da923f2abc02ffec664,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759758659624602938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8ad334ce562629ffb70e07c5d60e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67751a4b-192e-4fa1-984e-5fb260bbd823 name=/runtime.v1.RuntimeService/ListContainers
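The ListContainers request/response pairs above are the raw CRI traffic between the kubelet-facing API and CRI-O. For inspection, the same container list can be read in human-readable form with crictl on the node (a convenience for debugging, not something the test itself runs):

	out/minikube-linux-amd64 -p addons-395535 ssh "sudo crictl ps -a"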
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.638971336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=099a2985-73b4-4532-a27d-cbd4c93bc1cb name=/runtime.v1.RuntimeService/Version
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.639315387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=099a2985-73b4-4532-a27d-cbd4c93bc1cb name=/runtime.v1.RuntimeService/Version
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.641785493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13835a5f-25ab-474a-beab-6d92e9843b1c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.643103164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759758951643076465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606636,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13835a5f-25ab-474a-beab-6d92e9843b1c name=/runtime.v1.ImageService/ImageFsInfo
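	The request/response pairs above are ordinary CRI gRPC calls against CRI-O's socket. As a point of reference, here is a minimal standalone sketch (not part of the test suite) that issues the same Version and ImageFsInfo calls; the default CRI-O socket path /var/run/crio/crio.sock and the insecure local connection are assumptions for a stock minikube node, while the k8s.io/cri-api field names match those printed in the log:

	// cri_probe.go: sketch reproducing the Version and ImageFsInfo calls above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed: CRI-O's default socket; no TLS on a local unix socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// RuntimeService/Version: an empty Version field requests the current API,
		// matching the empty VersionRequest{Version:,} logged above.
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// ImageService/ImageFsInfo: reports image-filesystem usage, e.g. the
		// /var/lib/containers/storage/overlay-images mountpoint in the response above.
		img := runtimeapi.NewImageServiceClient(conn)
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("%s: %d bytes, %d inodes\n",
				f.FsId.Mountpoint, f.UsedBytes.Value, f.InodesUsed.Value)
		}
	}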
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.644017213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c92e9b82-7893-41ec-aa95-d180ccd48e84 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.644277922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c92e9b82-7893-41ec-aa95-d180ccd48e84 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.645075949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75fbd7f36e623db806bd153d472c81d05bd70e575dfa0c47dc24b54d213b1b0f,PodSandboxId:6262dac8c8927ee67f78cb8e9a9d305a43cbd9659284548b065d83ba90a952ae,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1759758951455335760,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-smmtw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 151f63c2-174d-4b74-8d7e-38e9f9e73550,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f83887e92b9323041051253eed7a66ee99ca8e107d65acc9747d8242c0d4866,PodSandboxId:7a69cb8e34f3e0d1fec0a1ecf8d82a8f0345735948a9a89b6c13bad901cc3019,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759758811122649576,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 795dc70e-a62b-4ffc-a2c2-c63baf69c4c2,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb21929b640e5b660fbcd08b1a6361d37cbaa67511d682e7af781b6fd9c4075,PodSandboxId:c84e3742d269f8df97d60592e68c836c39b56ead18cb753af3f02f953532d1c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759758771713320309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5379af33-1084-493d-a8
bf-f3ad31a70aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336ad52e3baf2cf7da7ac32e9063d4f20befe4d88363ed08fca964f43cb2f0f,PodSandboxId:99084f99d88f51d84f94927817342fbd507727bf3f699c919ef7980a5a15c307,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1759758754830290695,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-427tz,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 95659ace-bf51-4f25-80d0-f300d6eb71d8,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2c70593b46f9875b217de54ccea79b3f9331a23bd0867e95d3e44c6a0f0d3f6,PodSandboxId:23941b6614ea053ca4b7ee27eda4bf1d83f8fafb480cbc7c6d463c1bc52aff42,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de
79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759758742743260151,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-pzd59,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d1d5d38a-f3a1-42d7-97bc-c5e70daae68d,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81366001db9a8f9222d724c68dd28db7cf872a5daac7ead8f0f5a2dc1c2eec0a,PodSandboxId:5b86e83669802fe9af51cb73d015f243350fbec833f853b23dbc3fe6adf42ef7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webh
ook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1759758741666329478,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-496k4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b4390d1-a90e-4165-bc7e-6d94ffd07ee1,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff9148ff0e4d8b0a340c31829afbb84d6d2528be2de0807809883f4a216a7b2,PodSandboxId:3ba067222b0ea7b2884dc3dcfb13c9e4192801cf32b6036d35a589986c79c6f1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:reg
istry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1759758737889989837,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9zfgx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da7aeb47-5df3-40cc-b96e-6b6b9e6070be,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7019b1bb436d91fc77d57e71443868e55521e25733f746111362baf21e3601ec,PodSandboxId:2b03e5ab6df3e6a32d992a127ff30f874490aab7271b0a69ddb800ef8bf59730,Metadata:&ContainerMetadata{Name:gadget,
Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759758731803604665,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-rmw8b,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: eaee675f-3751-4815-9217-b41c12d637a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0feb4ec1b41abd52a40daa8d725bfdb63bfc157d3c5578ad11ce43d700ad3fc9,PodSandboxId:ad914aaaf
163862624cfb1071af59ad7968572d1df68600ad5bdd52f8bf79e83,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759758719351907382,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a378a875-cf6b-48a9-81bb-81d31cb79673,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:b10ff075ce38f8d334a4991bb793c68ec845668cd78487e8fbd2c13dbbe4e370,PodSandboxId:9a78d7ae893ca8c97409fd2e16f9a9476fcf0e595e173011e8045010eab2776d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759758702141002309,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e17cce6-7fa4-4192-a773-8370967be6ba,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bcdfcf86adf2668df683a1f953c3328a839aa00b56b8d07c5df6c3a00b1599,PodSandboxId:d57bf9128538b9fc8d8c81a1d9cb141b1bda83cdaa1f11447edbca630d62c253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759758678672896862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a66d8d-0033-489d-b301-bf2fcc689b91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e05184ccc4103c0548cd684553afbc084731f98f6f545a716784b4f73ef630,PodSandboxId:97382fd1699a463a268e084d173c530a122262b4382f10d2cd33b8bfb332a073,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759758672546563221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6fw22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 460916b0-5247-49dc-8a2b-987d81276af1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"c
ontainerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8838fff3bd090727ba4caee25649bf5953f07bdd4a1ca05dffb740ead5f92850,PodSandboxId:aa2a6de36c49e3815618f7e4039f2511095edfe88c6f0fd0352946ad7600f276,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759758671725751092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xc6l,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c56ecc-49e5-48fb-b175-8d023805f407,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c564d47e8e35788e497750f1bab5ab985606b38d72e709b71d9f8936c32b95,PodSandboxId:cba1b5ea7c1e616de650ca9fac06ad15499426e7a1c92dd42d17a00b6a3d4039,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759758659639048107,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-395535,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 505f3a4a41f6ad8edc79e323bfa8abe8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e19727d18d5ac31848fe288c77447cea4e3a152d0499200893939254f0dccf,PodSandboxId:901554f9da4eb4c8e9a1da2e032d2aa1d755d5017a0fbfc9a1caf0e1f069ab8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759758659633610534,Labels:map[string]string{io.kubernetes.container.name
: etcd,io.kubernetes.pod.name: etcd-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18f6a8b66e0dec09ba1a16d33efd4e6,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd955d28da969ef86ad05fc0892940dbf9dd4ac25ae9329d4b6f407cd3b357fb,PodSandboxId:9360798f7cc4b195acc485d14ed8b5db817d709db8059e4c4422a9d853c027fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING
,CreatedAt:1759758659616263216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8af611c3e8a4bc03d10aec072bc784f6,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6fd596d90eedbf064739cce8c7b46c3d9eaef3d637aa98d45f4e436d8611c4f,PodSandboxId:1cae5c37c465456c249d67b99b2126de717702e674180da923f2abc02ffec664,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759758659624602938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8ad334ce562629ffb70e07c5d60e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c92e9b82-7893-41ec-aa95-d180ccd48e84 name=/runtime.v1.RuntimeService/ListContainers
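	The dump above comes from an unfiltered ListContainersRequest, so CRI-O logs "No filters were applied" and returns every container on the node. ContainerFilter supports narrowing by id, state, sandbox, or label selector. A minimal sketch under the same socket assumption as the previous snippet; the label key is taken from the log entries above, while the ingress-nginx namespace value is only illustrative:

	// cri_list_filtered.go: sketch of ListContainers with an explicit filter,
	// in contrast to the empty ContainerFilter logged above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		running := runtimeapi.ContainerState_CONTAINER_RUNNING
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				// Only running containers whose pod lives in ingress-nginx
				// (namespace value chosen for illustration).
				State:         &runtimeapi.ContainerStateValue{State: running},
				LabelSelector: map[string]string{"io.kubernetes.pod.namespace": "ingress-nginx"},
			},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s %s %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}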
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.685279210Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f9e187a-3b0e-4c12-884d-8f5d6ac097bb name=/runtime.v1.RuntimeService/Version
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.685352193Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f9e187a-3b0e-4c12-884d-8f5d6ac097bb name=/runtime.v1.RuntimeService/Version
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.687256841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1b6bde5-23dd-418c-b700-d0e3ffad7349 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.688627482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759758951688588763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606636,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1b6bde5-23dd-418c-b700-d0e3ffad7349 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.689222603Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43184925-8f4e-42e8-909a-c07e391f70d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.689286680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43184925-8f4e-42e8-909a-c07e391f70d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.689884497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75fbd7f36e623db806bd153d472c81d05bd70e575dfa0c47dc24b54d213b1b0f,PodSandboxId:6262dac8c8927ee67f78cb8e9a9d305a43cbd9659284548b065d83ba90a952ae,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1759758951455335760,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-smmtw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 151f63c2-174d-4b74-8d7e-38e9f9e73550,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f83887e92b9323041051253eed7a66ee99ca8e107d65acc9747d8242c0d4866,PodSandboxId:7a69cb8e34f3e0d1fec0a1ecf8d82a8f0345735948a9a89b6c13bad901cc3019,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759758811122649576,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 795dc70e-a62b-4ffc-a2c2-c63baf69c4c2,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb21929b640e5b660fbcd08b1a6361d37cbaa67511d682e7af781b6fd9c4075,PodSandboxId:c84e3742d269f8df97d60592e68c836c39b56ead18cb753af3f02f953532d1c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759758771713320309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5379af33-1084-493d-a8
bf-f3ad31a70aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336ad52e3baf2cf7da7ac32e9063d4f20befe4d88363ed08fca964f43cb2f0f,PodSandboxId:99084f99d88f51d84f94927817342fbd507727bf3f699c919ef7980a5a15c307,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1759758754830290695,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-427tz,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 95659ace-bf51-4f25-80d0-f300d6eb71d8,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2c70593b46f9875b217de54ccea79b3f9331a23bd0867e95d3e44c6a0f0d3f6,PodSandboxId:23941b6614ea053ca4b7ee27eda4bf1d83f8fafb480cbc7c6d463c1bc52aff42,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de
79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759758742743260151,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-pzd59,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d1d5d38a-f3a1-42d7-97bc-c5e70daae68d,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81366001db9a8f9222d724c68dd28db7cf872a5daac7ead8f0f5a2dc1c2eec0a,PodSandboxId:5b86e83669802fe9af51cb73d015f243350fbec833f853b23dbc3fe6adf42ef7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webh
ook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1759758741666329478,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-496k4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b4390d1-a90e-4165-bc7e-6d94ffd07ee1,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff9148ff0e4d8b0a340c31829afbb84d6d2528be2de0807809883f4a216a7b2,PodSandboxId:3ba067222b0ea7b2884dc3dcfb13c9e4192801cf32b6036d35a589986c79c6f1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:reg
istry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1759758737889989837,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9zfgx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da7aeb47-5df3-40cc-b96e-6b6b9e6070be,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7019b1bb436d91fc77d57e71443868e55521e25733f746111362baf21e3601ec,PodSandboxId:2b03e5ab6df3e6a32d992a127ff30f874490aab7271b0a69ddb800ef8bf59730,Metadata:&ContainerMetadata{Name:gadget,
Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759758731803604665,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-rmw8b,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: eaee675f-3751-4815-9217-b41c12d637a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0feb4ec1b41abd52a40daa8d725bfdb63bfc157d3c5578ad11ce43d700ad3fc9,PodSandboxId:ad914aaaf
163862624cfb1071af59ad7968572d1df68600ad5bdd52f8bf79e83,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759758719351907382,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a378a875-cf6b-48a9-81bb-81d31cb79673,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:b10ff075ce38f8d334a4991bb793c68ec845668cd78487e8fbd2c13dbbe4e370,PodSandboxId:9a78d7ae893ca8c97409fd2e16f9a9476fcf0e595e173011e8045010eab2776d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759758702141002309,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e17cce6-7fa4-4192-a773-8370967be6ba,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bcdfcf86adf2668df683a1f953c3328a839aa00b56b8d07c5df6c3a00b1599,PodSandboxId:d57bf9128538b9fc8d8c81a1d9cb141b1bda83cdaa1f11447edbca630d62c253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759758678672896862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a66d8d-0033-489d-b301-bf2fcc689b91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e05184ccc4103c0548cd684553afbc084731f98f6f545a716784b4f73ef630,PodSandboxId:97382fd1699a463a268e084d173c530a122262b4382f10d2cd33b8bfb332a073,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759758672546563221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6fw22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 460916b0-5247-49dc-8a2b-987d81276af1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"c
ontainerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8838fff3bd090727ba4caee25649bf5953f07bdd4a1ca05dffb740ead5f92850,PodSandboxId:aa2a6de36c49e3815618f7e4039f2511095edfe88c6f0fd0352946ad7600f276,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759758671725751092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xc6l,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c56ecc-49e5-48fb-b175-8d023805f407,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c564d47e8e35788e497750f1bab5ab985606b38d72e709b71d9f8936c32b95,PodSandboxId:cba1b5ea7c1e616de650ca9fac06ad15499426e7a1c92dd42d17a00b6a3d4039,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759758659639048107,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-395535,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 505f3a4a41f6ad8edc79e323bfa8abe8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e19727d18d5ac31848fe288c77447cea4e3a152d0499200893939254f0dccf,PodSandboxId:901554f9da4eb4c8e9a1da2e032d2aa1d755d5017a0fbfc9a1caf0e1f069ab8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759758659633610534,Labels:map[string]string{io.kubernetes.container.name
: etcd,io.kubernetes.pod.name: etcd-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18f6a8b66e0dec09ba1a16d33efd4e6,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd955d28da969ef86ad05fc0892940dbf9dd4ac25ae9329d4b6f407cd3b357fb,PodSandboxId:9360798f7cc4b195acc485d14ed8b5db817d709db8059e4c4422a9d853c027fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING
,CreatedAt:1759758659616263216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8af611c3e8a4bc03d10aec072bc784f6,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6fd596d90eedbf064739cce8c7b46c3d9eaef3d637aa98d45f4e436d8611c4f,PodSandboxId:1cae5c37c465456c249d67b99b2126de717702e674180da923f2abc02ffec664,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759758659624602938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8ad334ce562629ffb70e07c5d60e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43184925-8f4e-42e8-909a-c07e391f70d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.732233306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ad34b44-60d9-466e-ae1c-3140e70fb1ed name=/runtime.v1.RuntimeService/Version
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.732739891Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ad34b44-60d9-466e-ae1c-3140e70fb1ed name=/runtime.v1.RuntimeService/Version
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.734711117Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=876ea6c2-ed86-470c-99f9-18dce700104e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.736773552Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759758951736744229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606636,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=876ea6c2-ed86-470c-99f9-18dce700104e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.737427267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfb29c41-47d4-454f-aaff-d07098a6c9d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.737630085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfb29c41-47d4-454f-aaff-d07098a6c9d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 13:55:51 addons-395535 crio[825]: time="2025-10-06 13:55:51.738136864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75fbd7f36e623db806bd153d472c81d05bd70e575dfa0c47dc24b54d213b1b0f,PodSandboxId:6262dac8c8927ee67f78cb8e9a9d305a43cbd9659284548b065d83ba90a952ae,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1759758951455335760,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-smmtw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 151f63c2-174d-4b74-8d7e-38e9f9e73550,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f83887e92b9323041051253eed7a66ee99ca8e107d65acc9747d8242c0d4866,PodSandboxId:7a69cb8e34f3e0d1fec0a1ecf8d82a8f0345735948a9a89b6c13bad901cc3019,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759758811122649576,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 795dc70e-a62b-4ffc-a2c2-c63baf69c4c2,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb21929b640e5b660fbcd08b1a6361d37cbaa67511d682e7af781b6fd9c4075,PodSandboxId:c84e3742d269f8df97d60592e68c836c39b56ead18cb753af3f02f953532d1c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759758771713320309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5379af33-1084-493d-a8
bf-f3ad31a70aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6336ad52e3baf2cf7da7ac32e9063d4f20befe4d88363ed08fca964f43cb2f0f,PodSandboxId:99084f99d88f51d84f94927817342fbd507727bf3f699c919ef7980a5a15c307,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1759758754830290695,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-427tz,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 95659ace-bf51-4f25-80d0-f300d6eb71d8,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2c70593b46f9875b217de54ccea79b3f9331a23bd0867e95d3e44c6a0f0d3f6,PodSandboxId:23941b6614ea053ca4b7ee27eda4bf1d83f8fafb480cbc7c6d463c1bc52aff42,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de
79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759758742743260151,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-pzd59,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d1d5d38a-f3a1-42d7-97bc-c5e70daae68d,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81366001db9a8f9222d724c68dd28db7cf872a5daac7ead8f0f5a2dc1c2eec0a,PodSandboxId:5b86e83669802fe9af51cb73d015f243350fbec833f853b23dbc3fe6adf42ef7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webh
ook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1759758741666329478,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-496k4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b4390d1-a90e-4165-bc7e-6d94ffd07ee1,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff9148ff0e4d8b0a340c31829afbb84d6d2528be2de0807809883f4a216a7b2,PodSandboxId:3ba067222b0ea7b2884dc3dcfb13c9e4192801cf32b6036d35a589986c79c6f1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:reg
istry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1759758737889989837,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9zfgx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da7aeb47-5df3-40cc-b96e-6b6b9e6070be,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7019b1bb436d91fc77d57e71443868e55521e25733f746111362baf21e3601ec,PodSandboxId:2b03e5ab6df3e6a32d992a127ff30f874490aab7271b0a69ddb800ef8bf59730,Metadata:&ContainerMetadata{Name:gadget,
Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759758731803604665,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-rmw8b,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: eaee675f-3751-4815-9217-b41c12d637a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:0feb4ec1b41abd52a40daa8d725bfdb63bfc157d3c5578ad11ce43d700ad3fc9,PodSandboxId:ad914aaaf163862624cfb1071af59ad7968572d1df68600ad5bdd52f8bf79e83,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759758719351907382,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a378a875-cf6b-48a9-81bb-81d31cb79673,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:b10ff075ce38f8d334a4991bb793c68ec845668cd78487e8fbd2c13dbbe4e370,PodSandboxId:9a78d7ae893ca8c97409fd2e16f9a9476fcf0e595e173011e8045010eab2776d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759758702141002309,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e17cce6-7fa4-4192-a773-8370967be6ba,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:87bcdfcf86adf2668df683a1f953c3328a839aa00b56b8d07c5df6c3a00b1599,PodSandboxId:d57bf9128538b9fc8d8c81a1d9cb141b1bda83cdaa1f11447edbca630d62c253,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759758678672896862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a66d8d-0033-489d-b301-bf2fcc689b91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:f1e05184ccc4103c0548cd684553afbc084731f98f6f545a716784b4f73ef630,PodSandboxId:97382fd1699a463a268e084d173c530a122262b4382f10d2cd33b8bfb332a073,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759758672546563221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6fw22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 460916b0-5247-49dc-8a2b-987d81276af1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:8838fff3bd090727ba4caee25649bf5953f07bdd4a1ca05dffb740ead5f92850,PodSandboxId:aa2a6de36c49e3815618f7e4039f2511095edfe88c6f0fd0352946ad7600f276,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759758671725751092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xc6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c56ecc-49e5-48fb-b175-8d023805f407,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:06c564d47e8e35788e497750f1bab5ab985606b38d72e709b71d9f8936c32b95,PodSandboxId:cba1b5ea7c1e616de650ca9fac06ad15499426e7a1c92dd42d17a00b6a3d4039,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759758659639048107,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 505f3a4a41f6ad8edc79e323bfa8abe8,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:28e19727d18d5ac31848fe288c77447cea4e3a152d0499200893939254f0dccf,PodSandboxId:901554f9da4eb4c8e9a1da2e032d2aa1d755d5017a0fbfc9a1caf0e1f069ab8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759758659633610534,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18f6a8b66e0dec09ba1a16d33efd4e6,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:cd955d28da969ef86ad05fc0892940dbf9dd4ac25ae9329d4b6f407cd3b357fb,PodSandboxId:9360798f7cc4b195acc485d14ed8b5db817d709db8059e4c4422a9d853c027fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759758659616263216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8af611c3e8a4bc03d10aec072bc784f6,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:c6fd596d90eedbf064739cce8c7b46c3d9eaef3d637aa98d45f4e436d8611c4f,PodSandboxId:1cae5c37c465456c249d67b99b2126de717702e674180da923f2abc02ffec664,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759758659624602938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-395535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8ad334ce562629ffb70e07c5d60e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfb29c41-47d4-454f-aaff-d07098a6c9d0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	75fbd7f36e623       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   6262dac8c8927       hello-world-app-5d498dc89-smmtw
	8f83887e92b93       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   7a69cb8e34f3e       nginx
	1eb21929b640e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   c84e3742d269f       busybox
	6336ad52e3baf       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago            Running             controller                0                   99084f99d88f5       ingress-nginx-controller-675c5ddd98-427tz
	a2c70593b46f9       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago            Running             local-path-provisioner    0                   23941b6614ea0       local-path-provisioner-648f6765c9-pzd59
	81366001db9a8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago            Exited              patch                     0                   5b86e83669802       ingress-nginx-admission-patch-496k4
	4ff9148ff0e4d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago            Exited              create                    0                   3ba067222b0ea       ingress-nginx-admission-create-9zfgx
	7019b1bb436d9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            3 minutes ago            Running             gadget                    0                   2b03e5ab6df3e       gadget-rmw8b
	0feb4ec1b41ab       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago            Running             minikube-ingress-dns      0                   ad914aaaf1638       kube-ingress-dns-minikube
	b10ff075ce38f       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago            Running             amd-gpu-device-plugin     0                   9a78d7ae893ca       amd-gpu-device-plugin-c5865
	87bcdfcf86adf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   d57bf9128538b       storage-provisioner
	f1e05184ccc41       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago            Running             coredns                   0                   97382fd1699a4       coredns-66bc5c9577-6fw22
	8838fff3bd090       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago            Running             kube-proxy                0                   aa2a6de36c49e       kube-proxy-8xc6l
	06c564d47e8e3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago            Running             kube-scheduler            0                   cba1b5ea7c1e6       kube-scheduler-addons-395535
	28e19727d18d5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago            Running             etcd                      0                   901554f9da4eb       etcd-addons-395535
	c6fd596d90eed       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago            Running             kube-controller-manager   0                   1cae5c37c4654       kube-controller-manager-addons-395535
	cd955d28da969       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago            Running             kube-apiserver            0                   9360798f7cc4b       kube-apiserver-addons-395535
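
The table above is the node's CRI view at the moment the post-mortem ran. It can be regenerated against the same profile with crictl over minikube ssh; a minimal sketch, assuming the addons-395535 VM from this run is still up:

	# list all CRI containers, including the two Exited admission-webhook jobs
	minikube -p addons-395535 ssh -- sudo crictl ps -a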
	
	
	==> coredns [f1e05184ccc4103c0548cd684553afbc084731f98f6f545a716784b4f73ef630] <==
	[INFO] 10.244.0.9:48704 - 7313 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002104261s
	[INFO] 10.244.0.9:48704 - 29802 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000127988s
	[INFO] 10.244.0.9:48704 - 53821 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000678593s
	[INFO] 10.244.0.9:48704 - 40136 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000099311s
	[INFO] 10.244.0.9:48704 - 21041 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000921483s
	[INFO] 10.244.0.9:48704 - 35105 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000344092s
	[INFO] 10.244.0.9:48704 - 40428 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000138157s
	[INFO] 10.244.0.9:53779 - 43649 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000167542s
	[INFO] 10.244.0.9:53779 - 44010 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000355089s
	[INFO] 10.244.0.9:34498 - 64260 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000174109s
	[INFO] 10.244.0.9:34498 - 64562 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122159s
	[INFO] 10.244.0.9:42502 - 55963 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000164536s
	[INFO] 10.244.0.9:42502 - 56216 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000194899s
	[INFO] 10.244.0.9:39165 - 27575 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127641s
	[INFO] 10.244.0.9:39165 - 27154 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000113602s
	[INFO] 10.244.0.23:59868 - 3245 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000460711s
	[INFO] 10.244.0.23:38604 - 24840 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00015104s
	[INFO] 10.244.0.23:41278 - 63778 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119234s
	[INFO] 10.244.0.23:33247 - 64414 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113585s
	[INFO] 10.244.0.23:51209 - 63638 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133803s
	[INFO] 10.244.0.23:53916 - 28135 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.001172126s
	[INFO] 10.244.0.23:52111 - 43072 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00159587s
	[INFO] 10.244.0.23:36896 - 19384 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.004669014s
	[INFO] 10.244.0.27:51421 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000673457s
	[INFO] 10.244.0.27:49189 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000229966s
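
The NXDOMAIN runs above are normal resolver behavior, not failures: with the default ndots:5, a short name is tried against every entry in the pod's search path (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the final NOERROR answer. A sketch of verifying this from inside the cluster, assuming the busybox test pod is still running:

	# show the search path that produces the fan-out logged above
	kubectl --context addons-395535 exec busybox -- cat /etc/resolv.conf
	# the fully qualified name resolves in a single query
	kubectl --context addons-395535 exec busybox -- nslookup registry.kube-system.svc.cluster.local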
	
	
	==> describe nodes <==
	Name:               addons-395535
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-395535
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=addons-395535
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T13_51_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-395535
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 13:51:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-395535
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 13:55:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 13:54:10 +0000   Mon, 06 Oct 2025 13:51:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 13:54:10 +0000   Mon, 06 Oct 2025 13:51:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 13:54:10 +0000   Mon, 06 Oct 2025 13:51:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 13:54:10 +0000   Mon, 06 Oct 2025 13:51:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    addons-395535
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3dc62729d154eda996bfcf9fce5c454
	  System UUID:                e3dc6272-9d15-4eda-996b-fcf9fce5c454
	  Boot ID:                    598b7d71-d0e2-46a1-81c0-782a5a076b4b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  default                     hello-world-app-5d498dc89-smmtw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-rmw8b                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-427tz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m33s
	  kube-system                 amd-gpu-device-plugin-c5865                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-66bc5c9577-6fw22                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m42s
	  kube-system                 etcd-addons-395535                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m47s
	  kube-system                 kube-apiserver-addons-395535                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-addons-395535        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-8xc6l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-addons-395535                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  local-path-storage          local-path-provisioner-648f6765c9-pzd59      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m38s                  kube-proxy       
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node addons-395535 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node addons-395535 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node addons-395535 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m47s                  kubelet          Node addons-395535 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s                  kubelet          Node addons-395535 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s                  kubelet          Node addons-395535 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m46s                  kubelet          Node addons-395535 status is now: NodeReady
	  Normal  RegisteredNode           4m43s                  node-controller  Node addons-395535 event: Registered Node addons-395535 in Controller
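
Everything in this node description is reproducible from the test context; a sketch for re-checking the conditions (the Ready/MemoryPressure rows are the first place a kubelet-side failure would surface):

	kubectl --context addons-395535 describe node addons-395535
	# or just the condition array, for scripting
	kubectl --context addons-395535 get node addons-395535 -o jsonpath='{.status.conditions}'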
	
	
	==> dmesg <==
	[  +0.520878] kauditd_printk_skb: 327 callbacks suppressed
	[  +0.079206] kauditd_printk_skb: 285 callbacks suppressed
	[  +0.882210] kauditd_printk_skb: 365 callbacks suppressed
	[ +13.946993] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.508439] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.041935] kauditd_printk_skb: 32 callbacks suppressed
	[Oct 6 13:52] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.506168] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.946396] kauditd_printk_skb: 59 callbacks suppressed
	[  +1.726563] kauditd_printk_skb: 126 callbacks suppressed
	[  +6.153976] kauditd_printk_skb: 111 callbacks suppressed
	[  +5.194096] kauditd_printk_skb: 47 callbacks suppressed
	[  +4.585700] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.541843] kauditd_printk_skb: 53 callbacks suppressed
	[Oct 6 13:53] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.040349] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.908882] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000026] kauditd_printk_skb: 138 callbacks suppressed
	[  +0.897068] kauditd_printk_skb: 144 callbacks suppressed
	[  +0.000058] kauditd_printk_skb: 186 callbacks suppressed
	[  +4.893912] kauditd_printk_skb: 85 callbacks suppressed
	[  +8.796729] kauditd_printk_skb: 25 callbacks suppressed
	[  +0.000063] kauditd_printk_skb: 10 callbacks suppressed
	[Oct 6 13:54] kauditd_printk_skb: 41 callbacks suppressed
	[Oct 6 13:55] kauditd_printk_skb: 127 callbacks suppressed
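
The repeated `kauditd_printk_skb: N callbacks suppressed` lines mean the kernel rate-limited audit records on their way to the console ring buffer, not that anything failed; they are expected noise on a busy CRI node. The full buffer can be pulled from the guest if needed; a sketch, assuming the profile is still running:

	minikube -p addons-395535 ssh -- sudo dmesg | tail -n 100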
	
	
	==> etcd [28e19727d18d5ac31848fe288c77447cea4e3a152d0499200893939254f0dccf] <==
	{"level":"warn","ts":"2025-10-06T13:52:15.174331Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.669622ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-06T13:52:15.174478Z","caller":"traceutil/trace.go:172","msg":"trace[458383048] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1029; }","duration":"139.928967ms","start":"2025-10-06T13:52:15.034540Z","end":"2025-10-06T13:52:15.174469Z","steps":["trace[458383048] 'agreement among raft nodes before linearized reading'  (duration: 139.574087ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-06T13:52:15.181798Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.555549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-06T13:52:15.181837Z","caller":"traceutil/trace.go:172","msg":"trace[984748148] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1030; }","duration":"138.606446ms","start":"2025-10-06T13:52:15.043223Z","end":"2025-10-06T13:52:15.181829Z","steps":["trace[984748148] 'agreement among raft nodes before linearized reading'  (duration: 138.52915ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-06T13:52:15.182142Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.85037ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-06T13:52:15.182169Z","caller":"traceutil/trace.go:172","msg":"trace[202790415] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1031; }","duration":"138.880924ms","start":"2025-10-06T13:52:15.043282Z","end":"2025-10-06T13:52:15.182163Z","steps":["trace[202790415] 'agreement among raft nodes before linearized reading'  (duration: 138.836089ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T13:52:19.737064Z","caller":"traceutil/trace.go:172","msg":"trace[442211730] linearizableReadLoop","detail":"{readStateIndex:1084; appliedIndex:1084; }","duration":"194.177324ms","start":"2025-10-06T13:52:19.542858Z","end":"2025-10-06T13:52:19.737035Z","steps":["trace[442211730] 'read index received'  (duration: 194.169142ms)","trace[442211730] 'applied index is now lower than readState.Index'  (duration: 6.85µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-06T13:52:19.737222Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.345746ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-06T13:52:19.737245Z","caller":"traceutil/trace.go:172","msg":"trace[29936870] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1053; }","duration":"194.384314ms","start":"2025-10-06T13:52:19.542854Z","end":"2025-10-06T13:52:19.737238Z","steps":["trace[29936870] 'agreement among raft nodes before linearized reading'  (duration: 194.31631ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T13:52:19.739106Z","caller":"traceutil/trace.go:172","msg":"trace[269245989] transaction","detail":"{read_only:false; response_revision:1054; number_of_response:1; }","duration":"199.69003ms","start":"2025-10-06T13:52:19.539403Z","end":"2025-10-06T13:52:19.739093Z","steps":["trace[269245989] 'process raft request'  (duration: 197.927177ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-06T13:52:19.741138Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.513485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-06T13:52:19.741190Z","caller":"traceutil/trace.go:172","msg":"trace[1519383635] range","detail":"{range_begin:/registry/csinodes; range_end:; response_count:0; response_revision:1054; }","duration":"180.576856ms","start":"2025-10-06T13:52:19.560606Z","end":"2025-10-06T13:52:19.741183Z","steps":["trace[1519383635] 'agreement among raft nodes before linearized reading'  (duration: 180.449883ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-06T13:52:19.745722Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.181728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-06T13:52:19.745827Z","caller":"traceutil/trace.go:172","msg":"trace[1028625705] range","detail":"{range_begin:/registry/runtimeclasses; range_end:; response_count:0; response_revision:1054; }","duration":"166.292916ms","start":"2025-10-06T13:52:19.579522Z","end":"2025-10-06T13:52:19.745815Z","steps":["trace[1028625705] 'agreement among raft nodes before linearized reading'  (duration: 162.239122ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T13:52:24.227979Z","caller":"traceutil/trace.go:172","msg":"trace[855179951] transaction","detail":"{read_only:false; response_revision:1094; number_of_response:1; }","duration":"141.107651ms","start":"2025-10-06T13:52:24.086853Z","end":"2025-10-06T13:52:24.227961Z","steps":["trace[855179951] 'process raft request'  (duration: 140.549159ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T13:52:32.064528Z","caller":"traceutil/trace.go:172","msg":"trace[209131438] linearizableReadLoop","detail":"{readStateIndex:1170; appliedIndex:1170; }","duration":"146.266097ms","start":"2025-10-06T13:52:31.918232Z","end":"2025-10-06T13:52:32.064498Z","steps":["trace[209131438] 'read index received'  (duration: 146.260194ms)","trace[209131438] 'applied index is now lower than readState.Index'  (duration: 5.241µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-06T13:52:32.064690Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.425015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-06T13:52:32.064723Z","caller":"traceutil/trace.go:172","msg":"trace[1493481789] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1136; }","duration":"146.489901ms","start":"2025-10-06T13:52:31.918228Z","end":"2025-10-06T13:52:32.064718Z","steps":["trace[1493481789] 'agreement among raft nodes before linearized reading'  (duration: 146.396914ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T13:52:32.066054Z","caller":"traceutil/trace.go:172","msg":"trace[1571442708] transaction","detail":"{read_only:false; response_revision:1137; number_of_response:1; }","duration":"225.983833ms","start":"2025-10-06T13:52:31.840056Z","end":"2025-10-06T13:52:32.066039Z","steps":["trace[1571442708] 'process raft request'  (duration: 224.831394ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T13:52:40.358660Z","caller":"traceutil/trace.go:172","msg":"trace[1397472719] transaction","detail":"{read_only:false; response_revision:1184; number_of_response:1; }","duration":"119.949596ms","start":"2025-10-06T13:52:40.238698Z","end":"2025-10-06T13:52:40.358647Z","steps":["trace[1397472719] 'process raft request'  (duration: 119.843363ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T13:52:50.392181Z","caller":"traceutil/trace.go:172","msg":"trace[921903109] transaction","detail":"{read_only:false; response_revision:1234; number_of_response:1; }","duration":"170.260159ms","start":"2025-10-06T13:52:50.221907Z","end":"2025-10-06T13:52:50.392168Z","steps":["trace[921903109] 'process raft request'  (duration: 170.160565ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T13:53:20.682750Z","caller":"traceutil/trace.go:172","msg":"trace[1017006549] transaction","detail":"{read_only:false; response_revision:1451; number_of_response:1; }","duration":"186.554617ms","start":"2025-10-06T13:53:20.496182Z","end":"2025-10-06T13:53:20.682736Z","steps":["trace[1017006549] 'process raft request'  (duration: 186.070686ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T13:53:20.854016Z","caller":"traceutil/trace.go:172","msg":"trace[737108924] transaction","detail":"{read_only:false; response_revision:1452; number_of_response:1; }","duration":"165.490512ms","start":"2025-10-06T13:53:20.688484Z","end":"2025-10-06T13:53:20.853974Z","steps":["trace[737108924] 'process raft request'  (duration: 161.152939ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T13:53:33.460116Z","caller":"traceutil/trace.go:172","msg":"trace[391348642] transaction","detail":"{read_only:false; response_revision:1589; number_of_response:1; }","duration":"237.281459ms","start":"2025-10-06T13:53:33.222821Z","end":"2025-10-06T13:53:33.460102Z","steps":["trace[391348642] 'process raft request'  (duration: 237.169009ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T13:53:43.195982Z","caller":"traceutil/trace.go:172","msg":"trace[1266480916] transaction","detail":"{read_only:false; response_revision:1647; number_of_response:1; }","duration":"123.196769ms","start":"2025-10-06T13:53:43.072773Z","end":"2025-10-06T13:53:43.195970Z","steps":["trace[1266480916] 'process raft request'  (duration: 122.790688ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:55:52 up 5 min,  0 users,  load average: 1.53, 1.78, 0.91
	Linux addons-395535 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [cd955d28da969ef86ad05fc0892940dbf9dd4ac25ae9329d4b6f407cd3b357fb] <==
	 > logger="UnhandledError"
	E1006 13:52:03.218426       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.41.174:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.41.174:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.41.174:443: connect: connection refused" logger="UnhandledError"
	E1006 13:52:03.219447       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.41.174:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.41.174:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.41.174:443: connect: connection refused" logger="UnhandledError"
	E1006 13:52:03.225729       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.41.174:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.41.174:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.41.174:443: connect: connection refused" logger="UnhandledError"
	I1006 13:52:03.365163       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1006 13:52:59.052101       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:50496: use of closed network connection
	E1006 13:52:59.285082       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:50508: use of closed network connection
	I1006 13:53:08.645757       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.168.84"}
	I1006 13:53:27.971939       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1006 13:53:28.173345       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.221.159"}
	I1006 13:53:44.125814       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1006 13:54:04.229996       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1006 13:54:08.467767       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 13:54:08.467901       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 13:54:08.505841       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 13:54:08.508130       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 13:54:08.508261       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 13:54:08.558192       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 13:54:08.558257       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 13:54:08.621868       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 13:54:08.621935       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1006 13:54:09.512823       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1006 13:54:09.623788       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1006 13:54:09.709652       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1006 13:55:50.221027       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.3.220"}
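
The early `v1beta1.metrics.k8s.io ... connection refused` errors are the aggregation layer probing metrics-server before its endpoint was ready, and the `Terminating all watchers` lines at 13:54:09 line up with the snapshot.storage.k8s.io CRDs being deleted. Whether the aggregated API ever settled can be checked directly; a sketch:

	kubectl --context addons-395535 get apiservice v1beta1.metrics.k8s.io -o wide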
	
	
	==> kube-controller-manager [c6fd596d90eedbf064739cce8c7b46c3d9eaef3d637aa98d45f4e436d8611c4f] <==
	E1006 13:54:14.151223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:54:19.004360       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:54:19.005787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:54:19.482588       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:54:19.483868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:54:20.078659       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:54:20.080011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:54:25.895043       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:54:25.896140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:54:28.452006       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:54:28.453113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:54:31.057565       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:54:31.058866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:54:46.569807       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:54:46.570928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:54:49.453805       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:54:49.455078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:54:55.822503       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:54:55.823655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:55:20.135079       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:55:20.136195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:55:32.209264       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:55:32.210469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 13:55:42.152265       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 13:55:42.153748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
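
Every `Failed to watch *v1.PartialObjectMetadata` pair here appears to follow the snapshot.storage.k8s.io CRD removal at 13:54:08 (see the apiserver log above): the controller-manager's metadata informers keep retrying watches for group-versions that no longer exist, which is noisy but harmless. A sketch for confirming the CRDs are gone:

	kubectl --context addons-395535 get crd | grep -i snapshot || echo 'no snapshot CRDs installed'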
	
	
	==> kube-proxy [8838fff3bd090727ba4caee25649bf5953f07bdd4a1ca05dffb740ead5f92850] <==
	I1006 13:51:12.922928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 13:51:13.024481       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 13:51:13.024560       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.36"]
	E1006 13:51:13.024647       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 13:51:13.114449       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1006 13:51:13.114526       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1006 13:51:13.114582       1 server_linux.go:132] "Using iptables Proxier"
	I1006 13:51:13.142956       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 13:51:13.143827       1 server.go:527] "Version info" version="v1.34.1"
	I1006 13:51:13.143841       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 13:51:13.150553       1 config.go:200] "Starting service config controller"
	I1006 13:51:13.150734       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 13:51:13.150760       1 config.go:106] "Starting endpoint slice config controller"
	I1006 13:51:13.150993       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 13:51:13.151029       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 13:51:13.151034       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 13:51:13.153994       1 config.go:309] "Starting node config controller"
	I1006 13:51:13.171504       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 13:51:13.171522       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 13:51:13.251793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 13:51:13.251849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1006 13:51:13.252841       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
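
kube-proxy came up cleanly in single-stack IPv4 iptables mode; the ip6tables message just reflects a guest kernel without the IPv6 NAT table and is downgraded to a warning. The programmed service rules can be inspected on the node; a sketch:

	minikube -p addons-395535 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20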
	
	
	==> kube-scheduler [06c564d47e8e35788e497750f1bab5ab985606b38d72e709b71d9f8936c32b95] <==
	E1006 13:51:02.636341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 13:51:02.636471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 13:51:02.636547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 13:51:02.638089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 13:51:02.638347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 13:51:02.638565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1006 13:51:02.638693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1006 13:51:02.639793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 13:51:02.639869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 13:51:02.639928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1006 13:51:02.639985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 13:51:03.472829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 13:51:03.497061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1006 13:51:03.512929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 13:51:03.557563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 13:51:03.617917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 13:51:03.628682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 13:51:03.640995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1006 13:51:03.674625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1006 13:51:03.738736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 13:51:03.820049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 13:51:03.861502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 13:51:03.879117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1006 13:51:03.960638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1006 13:51:06.027609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 13:54:13 addons-395535 kubelet[1505]: I1006 13:54:13.337684    1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6f0d17-540c-4901-80d7-b710cf82300e" path="/var/lib/kubelet/pods/ae6f0d17-540c-4901-80d7-b710cf82300e/volumes"
	Oct 06 13:54:15 addons-395535 kubelet[1505]: E1006 13:54:15.677170    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759758855676629431  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:54:15 addons-395535 kubelet[1505]: E1006 13:54:15.677217    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759758855676629431  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:54:25 addons-395535 kubelet[1505]: E1006 13:54:25.680458    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759758865679953959  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:54:25 addons-395535 kubelet[1505]: E1006 13:54:25.680512    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759758865679953959  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:54:28 addons-395535 kubelet[1505]: I1006 13:54:28.332358    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-c5865" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 13:54:35 addons-395535 kubelet[1505]: E1006 13:54:35.683727    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759758875683102982  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:54:35 addons-395535 kubelet[1505]: E1006 13:54:35.683776    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759758875683102982  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:54:45 addons-395535 kubelet[1505]: E1006 13:54:45.687415    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759758885686791070  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:54:45 addons-395535 kubelet[1505]: E1006 13:54:45.687465    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759758885686791070  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:54:55 addons-395535 kubelet[1505]: E1006 13:54:55.690742    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759758895690052485  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:54:55 addons-395535 kubelet[1505]: E1006 13:54:55.690788    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759758895690052485  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:55:05 addons-395535 kubelet[1505]: E1006 13:55:05.694026    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759758905693538976  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:55:05 addons-395535 kubelet[1505]: E1006 13:55:05.694052    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759758905693538976  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:55:14 addons-395535 kubelet[1505]: I1006 13:55:14.331242    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 13:55:15 addons-395535 kubelet[1505]: E1006 13:55:15.698846    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759758915697868230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:55:15 addons-395535 kubelet[1505]: E1006 13:55:15.698942    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759758915697868230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:55:25 addons-395535 kubelet[1505]: E1006 13:55:25.702205    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759758925701812900  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:55:25 addons-395535 kubelet[1505]: E1006 13:55:25.702229    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759758925701812900  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:55:35 addons-395535 kubelet[1505]: E1006 13:55:35.705703    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759758935705042630  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:55:35 addons-395535 kubelet[1505]: E1006 13:55:35.705756    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759758935705042630  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:55:45 addons-395535 kubelet[1505]: E1006 13:55:45.709086    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759758945708547749  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:55:45 addons-395535 kubelet[1505]: E1006 13:55:45.709183    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759758945708547749  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598030}  inodes_used:{value:201}}"
	Oct 06 13:55:50 addons-395535 kubelet[1505]: I1006 13:55:50.218490    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxxf\" (UniqueName: \"kubernetes.io/projected/151f63c2-174d-4b74-8d7e-38e9f9e73550-kube-api-access-fdxxf\") pod \"hello-world-app-5d498dc89-smmtw\" (UID: \"151f63c2-174d-4b74-8d7e-38e9f9e73550\") " pod="default/hello-world-app-5d498dc89-smmtw"
	Oct 06 13:55:52 addons-395535 kubelet[1505]: I1006 13:55:52.318458    1505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-smmtw" podStartSLOduration=1.600320199 podStartE2EDuration="2.318434918s" podCreationTimestamp="2025-10-06 13:55:50 +0000 UTC" firstStartedPulling="2025-10-06 13:55:50.711517661 +0000 UTC m=+285.549488093" lastFinishedPulling="2025-10-06 13:55:51.429632382 +0000 UTC m=+286.267602812" observedRunningTime="2025-10-06 13:55:52.316670451 +0000 UTC m=+287.154640898" watchObservedRunningTime="2025-10-06 13:55:52.318434918 +0000 UTC m=+287.156405364"
	
	
	==> storage-provisioner [87bcdfcf86adf2668df683a1f953c3328a839aa00b56b8d07c5df6c3a00b1599] <==
	W1006 13:55:27.847393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:29.851677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:29.862589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:31.866619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:31.875821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:33.880162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:33.886573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:35.890984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:35.900647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:37.904703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:37.910245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:39.914130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:39.922833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:41.926995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:41.932561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:43.935909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:43.942245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:45.947574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:45.955105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:47.958296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:47.964762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:49.969146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:49.979787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:51.985287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 13:55:51.991593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
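Note: the repeated storage-provisioner warnings in the dump above are deprecation noise rather than part of this failure: its leader-election path still lists v1 Endpoints, which the warning says is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. Below is a minimal client-go sketch of the replacement API, assuming in-cluster configuration; this is not the provisioner's actual code.

	// List EndpointSlices via discovery.k8s.io/v1, the API the warnings point at.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Assumption: running in-cluster, like the provisioner pod itself.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, "->", len(s.Endpoints), "endpoint groups")
		}
	}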
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-395535 -n addons-395535
helpers_test.go:269: (dbg) Run:  kubectl --context addons-395535 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-9zfgx ingress-nginx-admission-patch-496k4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-395535 describe pod ingress-nginx-admission-create-9zfgx ingress-nginx-admission-patch-496k4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-395535 describe pod ingress-nginx-admission-create-9zfgx ingress-nginx-admission-patch-496k4: exit status 1 (64.961499ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9zfgx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-496k4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-395535 describe pod ingress-nginx-admission-create-9zfgx ingress-nginx-admission-patch-496k4: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-395535 addons disable ingress-dns --alsologtostderr -v=1: (2.310524177s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-395535 addons disable ingress --alsologtostderr -v=1: (7.887211924s)
--- FAIL: TestAddons/parallel/Ingress (155.59s)
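Note: exit status 28 appears to be curl's "operation timed out" code bubbling up through ssh, i.e. the command ran inside the VM but the ingress never answered on 127.0.0.1:80. A minimal Go sketch for re-running the failing step from addons_test.go:264 by hand, assuming the out/minikube-linux-amd64 binary and the addons-395535 profile from the logs above (the deadlines are assumptions):

	// Re-run the in-VM curl that timed out in the test above.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Bound the whole call so a hung ingress surfaces as a context
		// deadline error instead of a bare ssh exit status.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-395535",
			"ssh", "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %q\nerr: %v\n", out, err)
	}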

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (13.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdspecific-port3380696694/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.452325ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1006 14:02:38.834025  743851 retry.go:31] will retry after 735.770272ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.327519ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1006 14:02:39.801262  743851 retry.go:31] will retry after 522.269829ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (229.883803ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1006 14:02:40.554411  743851 retry.go:31] will retry after 1.106527363s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (210.578887ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1006 14:02:41.872484  743851 retry.go:31] will retry after 1.611697539s: exit status 1
2025/10/06 14:02:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.9926ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1006 14:02:43.684465  743851 retry.go:31] will retry after 1.581321015s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (194.479813ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1006 14:02:45.461349  743851 retry.go:31] will retry after 5.669474098s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (189.703068ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 12.718242659s: exit status 1
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (195.447037ms)

                                                
                                                
-- stdout --
	total 0
	drwxr-xr-x  2 root root  40 Oct  6 14:02 .
	drwxr-xr-x 20 root root 560 Oct  6 14:02 ..
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-amd64 -p functional-561811 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "sudo umount -f /mount-9p": exit status 1 (200.659351ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-561811 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdspecific-port3380696694/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdspecific-port3380696694/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdspecific-port3380696694/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I1006 14:02:38.649520  752251 out.go:360] Setting OutFile to fd 1 ...
I1006 14:02:38.649685  752251 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:38.649697  752251 out.go:374] Setting ErrFile to fd 2...
I1006 14:02:38.649701  752251 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:38.649906  752251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
I1006 14:02:38.650195  752251 mustload.go:65] Loading cluster: functional-561811
I1006 14:02:38.650577  752251 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:02:38.651833  752251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:38.651926  752251 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:38.668166  752251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43269
I1006 14:02:38.668850  752251 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:38.669381  752251 main.go:141] libmachine: Using API Version  1
I1006 14:02:38.669403  752251 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:38.669849  752251 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:38.670099  752251 main.go:141] libmachine: (functional-561811) Calling .GetState
I1006 14:02:38.672175  752251 host.go:66] Checking if "functional-561811" exists ...
I1006 14:02:38.672641  752251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:38.672699  752251 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:38.687146  752251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36413
I1006 14:02:38.687668  752251 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:38.688095  752251 main.go:141] libmachine: Using API Version  1
I1006 14:02:38.688118  752251 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:38.688609  752251 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:38.688826  752251 main.go:141] libmachine: (functional-561811) Calling .DriverName
I1006 14:02:38.689042  752251 main.go:141] libmachine: (functional-561811) Calling .DriverName
I1006 14:02:38.689218  752251 main.go:141] libmachine: (functional-561811) Calling .GetIP
I1006 14:02:38.693548  752251 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:38.694150  752251 main.go:141] libmachine: (functional-561811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:42:d5", ip: ""} in network mk-functional-561811: {Iface:virbr1 ExpiryTime:2025-10-06 15:00:02 +0000 UTC Type:0 Mac:52:54:00:c9:42:d5 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-561811 Clientid:01:52:54:00:c9:42:d5}
I1006 14:02:38.694189  752251 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined IP address 192.168.39.208 and MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:38.699312  752251 out.go:203] 
W1006 14:02:38.700871  752251 out.go:285] X Exiting due to IF_MOUNT_PORT: Error finding port for mount: Error accessing port 46464
X Exiting due to IF_MOUNT_PORT: Error finding port for mount: Error accessing port 46464
W1006 14:02:38.700891  752251 out.go:285] * 
* 
W1006 14:02:38.708046  752251 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_9b5dbe5b7e959fd72b948ca11fe2bb87a2de3a45_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1006 14:02:38.709969  752251 out.go:203] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (13.22s)
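Note: the retries above were never going to succeed; the stderr shows the mount helper exited immediately with IF_MOUNT_PORT ("Error accessing port 46464"), so every findmnt probe was polling a 9p mount that never started. A minimal pre-flight check in Go, assuming that failing to bind the port locally is the same condition the helper trips over (the port number is taken from the failing command):

	// Check that the fixed mount port is bindable before running
	// `minikube mount ... --port 46464`.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		ln, err := net.Listen("tcp", ":46464")
		if err != nil {
			fmt.Println("port 46464 is busy or unbindable:", err)
			return
		}
		ln.Close()
		fmt.Println("port 46464 is free")
	}

If the bind fails, free the port or drop --port and let minikube pick one.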

                                                
                                    
x
+
TestPreload (137.13s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-907615 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1006 14:40:06.657490  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-907615 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m14.238155051s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-907615 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-907615 image pull gcr.io/k8s-minikube/busybox: (2.402872129s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-907615
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-907615: (8.389975516s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-907615 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-907615 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (49.022846326s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-907615 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-06 14:41:53.542429834 +0000 UTC m=+3100.790061613
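Note: the image list above holds only the stock v1.32.0 control-plane images, so the gcr.io/k8s-minikube/busybox image pulled at preload_test.go:51 did not survive the stop/start cycle. A minimal sketch of the failing assertion, assuming the binary path and profile name from the log; this is not the test's actual helper:

	// Assert that a manually pulled image is still listed after restart.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-907615",
			"image", "list").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "image list failed: %v\n%s", err, out)
			os.Exit(1)
		}
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Fprintf(os.Stderr, "busybox missing after restart:\n%s", out)
			os.Exit(1)
		}
		fmt.Println("busybox survived the restart")
	}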
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-907615 -n test-preload-907615
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-907615 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-907615 logs -n 25: (1.243149579s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-962847 ssh -n multinode-962847-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:29 UTC │ 06 Oct 25 14:29 UTC │
	│ ssh     │ multinode-962847 ssh -n multinode-962847 sudo cat /home/docker/cp-test_multinode-962847-m03_multinode-962847.txt                                                                    │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:29 UTC │ 06 Oct 25 14:29 UTC │
	│ cp      │ multinode-962847 cp multinode-962847-m03:/home/docker/cp-test.txt multinode-962847-m02:/home/docker/cp-test_multinode-962847-m03_multinode-962847-m02.txt                           │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:29 UTC │ 06 Oct 25 14:29 UTC │
	│ ssh     │ multinode-962847 ssh -n multinode-962847-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:29 UTC │ 06 Oct 25 14:29 UTC │
	│ ssh     │ multinode-962847 ssh -n multinode-962847-m02 sudo cat /home/docker/cp-test_multinode-962847-m03_multinode-962847-m02.txt                                                            │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:29 UTC │ 06 Oct 25 14:29 UTC │
	│ node    │ multinode-962847 node stop m03                                                                                                                                                      │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:29 UTC │ 06 Oct 25 14:29 UTC │
	│ node    │ multinode-962847 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:29 UTC │ 06 Oct 25 14:29 UTC │
	│ node    │ list -p multinode-962847                                                                                                                                                            │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:29 UTC │                     │
	│ stop    │ -p multinode-962847                                                                                                                                                                 │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:29 UTC │ 06 Oct 25 14:32 UTC │
	│ start   │ -p multinode-962847 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:34 UTC │
	│ node    │ list -p multinode-962847                                                                                                                                                            │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:34 UTC │                     │
	│ node    │ multinode-962847 node delete m03                                                                                                                                                    │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:34 UTC │ 06 Oct 25 14:34 UTC │
	│ stop    │ multinode-962847 stop                                                                                                                                                               │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:34 UTC │ 06 Oct 25 14:37 UTC │
	│ start   │ -p multinode-962847 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:37 UTC │ 06 Oct 25 14:38 UTC │
	│ node    │ list -p multinode-962847                                                                                                                                                            │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:38 UTC │                     │
	│ start   │ -p multinode-962847-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-962847-m02 │ jenkins │ v1.37.0 │ 06 Oct 25 14:38 UTC │                     │
	│ start   │ -p multinode-962847-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-962847-m03 │ jenkins │ v1.37.0 │ 06 Oct 25 14:38 UTC │ 06 Oct 25 14:39 UTC │
	│ node    │ add -p multinode-962847                                                                                                                                                             │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │                     │
	│ delete  │ -p multinode-962847-m03                                                                                                                                                             │ multinode-962847-m03 │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │ 06 Oct 25 14:39 UTC │
	│ delete  │ -p multinode-962847                                                                                                                                                                 │ multinode-962847     │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │ 06 Oct 25 14:39 UTC │
	│ start   │ -p test-preload-907615 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-907615  │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ test-preload-907615 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-907615  │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ stop    │ -p test-preload-907615                                                                                                                                                              │ test-preload-907615  │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:41 UTC │
	│ start   │ -p test-preload-907615 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-907615  │ jenkins │ v1.37.0 │ 06 Oct 25 14:41 UTC │ 06 Oct 25 14:41 UTC │
	│ image   │ test-preload-907615 image list                                                                                                                                                      │ test-preload-907615  │ jenkins │ v1.37.0 │ 06 Oct 25 14:41 UTC │ 06 Oct 25 14:41 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:41:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:41:04.328213  773552 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:41:04.328526  773552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:41:04.328537  773552 out.go:374] Setting ErrFile to fd 2...
	I1006 14:41:04.328542  773552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:41:04.328775  773552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:41:04.329242  773552 out.go:368] Setting JSON to false
	I1006 14:41:04.330231  773552 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15815,"bootTime":1759745849,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:41:04.330348  773552 start.go:140] virtualization: kvm guest
	I1006 14:41:04.332458  773552 out.go:179] * [test-preload-907615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:41:04.333947  773552 notify.go:220] Checking for updates...
	I1006 14:41:04.334004  773552 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:41:04.335507  773552 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:41:04.336881  773552 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 14:41:04.338105  773552 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 14:41:04.339437  773552 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:41:04.341092  773552 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:41:04.342816  773552 config.go:182] Loaded profile config "test-preload-907615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1006 14:41:04.343258  773552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:41:04.343313  773552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:41:04.357578  773552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I1006 14:41:04.358128  773552 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:41:04.358794  773552 main.go:141] libmachine: Using API Version  1
	I1006 14:41:04.358825  773552 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:41:04.359245  773552 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:41:04.359466  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	I1006 14:41:04.361274  773552 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1006 14:41:04.362443  773552 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:41:04.362799  773552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:41:04.362850  773552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:41:04.376085  773552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I1006 14:41:04.376567  773552 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:41:04.377040  773552 main.go:141] libmachine: Using API Version  1
	I1006 14:41:04.377067  773552 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:41:04.377432  773552 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:41:04.377646  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	I1006 14:41:04.410360  773552 out.go:179] * Using the kvm2 driver based on existing profile
	I1006 14:41:04.411757  773552 start.go:304] selected driver: kvm2
	I1006 14:41:04.411771  773552 start.go:924] validating driver "kvm2" against &{Name:test-preload-907615 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-907615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:41:04.411874  773552 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:41:04.412646  773552 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:41:04.412728  773552 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 14:41:04.426439  773552 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 14:41:04.426466  773552 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 14:41:04.441492  773552 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 14:41:04.441895  773552 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:41:04.441939  773552 cni.go:84] Creating CNI manager for ""
	I1006 14:41:04.441986  773552 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:41:04.442043  773552 start.go:348] cluster config:
	{Name:test-preload-907615 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-907615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:41:04.442135  773552 iso.go:125] acquiring lock: {Name:mk8de6812bb58933af0bc6eb1d955bf118a3bcec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:41:04.443761  773552 out.go:179] * Starting "test-preload-907615" primary control-plane node in "test-preload-907615" cluster
	I1006 14:41:04.444891  773552 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1006 14:41:04.470136  773552 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1006 14:41:04.470198  773552 cache.go:58] Caching tarball of preloaded images
	I1006 14:41:04.470397  773552 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1006 14:41:04.472072  773552 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1006 14:41:04.473332  773552 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1006 14:41:04.499944  773552 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1006 14:41:04.500008  773552 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1006 14:41:07.165771  773552 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1006 14:41:07.165909  773552 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/config.json ...
	I1006 14:41:07.166154  773552 start.go:360] acquireMachinesLock for test-preload-907615: {Name:mkc5be1cfc8fcefa1839aef4c67a376cc5095e30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1006 14:41:07.166223  773552 start.go:364] duration metric: took 44.224µs to acquireMachinesLock for "test-preload-907615"
	I1006 14:41:07.166236  773552 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:41:07.166242  773552 fix.go:54] fixHost starting: 
	I1006 14:41:07.166507  773552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:41:07.166548  773552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:41:07.180124  773552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I1006 14:41:07.180637  773552 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:41:07.181114  773552 main.go:141] libmachine: Using API Version  1
	I1006 14:41:07.181144  773552 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:41:07.181516  773552 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:41:07.181733  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	I1006 14:41:07.181882  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetState
	I1006 14:41:07.183664  773552 fix.go:112] recreateIfNeeded on test-preload-907615: state=Stopped err=<nil>
	I1006 14:41:07.183707  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	W1006 14:41:07.183882  773552 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:41:07.185716  773552 out.go:252] * Restarting existing kvm2 VM for "test-preload-907615" ...
	I1006 14:41:07.185754  773552 main.go:141] libmachine: (test-preload-907615) Calling .Start
	I1006 14:41:07.185927  773552 main.go:141] libmachine: (test-preload-907615) starting domain...
	I1006 14:41:07.185949  773552 main.go:141] libmachine: (test-preload-907615) ensuring networks are active...
	I1006 14:41:07.186868  773552 main.go:141] libmachine: (test-preload-907615) Ensuring network default is active
	I1006 14:41:07.187319  773552 main.go:141] libmachine: (test-preload-907615) Ensuring network mk-test-preload-907615 is active
	I1006 14:41:07.187983  773552 main.go:141] libmachine: (test-preload-907615) getting domain XML...
	I1006 14:41:07.189289  773552 main.go:141] libmachine: (test-preload-907615) DBG | starting domain XML:
	I1006 14:41:07.189309  773552 main.go:141] libmachine: (test-preload-907615) DBG | <domain type='kvm'>
	I1006 14:41:07.189316  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <name>test-preload-907615</name>
	I1006 14:41:07.189322  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <uuid>9671fd2e-e0eb-461e-9c73-7e9f0c8d7ea6</uuid>
	I1006 14:41:07.189329  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <memory unit='KiB'>3145728</memory>
	I1006 14:41:07.189334  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1006 14:41:07.189340  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <vcpu placement='static'>2</vcpu>
	I1006 14:41:07.189344  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <os>
	I1006 14:41:07.189364  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1006 14:41:07.189380  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <boot dev='cdrom'/>
	I1006 14:41:07.189390  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <boot dev='hd'/>
	I1006 14:41:07.189398  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <bootmenu enable='no'/>
	I1006 14:41:07.189404  773552 main.go:141] libmachine: (test-preload-907615) DBG |   </os>
	I1006 14:41:07.189411  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <features>
	I1006 14:41:07.189416  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <acpi/>
	I1006 14:41:07.189420  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <apic/>
	I1006 14:41:07.189425  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <pae/>
	I1006 14:41:07.189429  773552 main.go:141] libmachine: (test-preload-907615) DBG |   </features>
	I1006 14:41:07.189434  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1006 14:41:07.189439  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <clock offset='utc'/>
	I1006 14:41:07.189477  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <on_poweroff>destroy</on_poweroff>
	I1006 14:41:07.189498  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <on_reboot>restart</on_reboot>
	I1006 14:41:07.189505  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <on_crash>destroy</on_crash>
	I1006 14:41:07.189513  773552 main.go:141] libmachine: (test-preload-907615) DBG |   <devices>
	I1006 14:41:07.189522  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1006 14:41:07.189531  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <disk type='file' device='cdrom'>
	I1006 14:41:07.189537  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <driver name='qemu' type='raw'/>
	I1006 14:41:07.189545  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/boot2docker.iso'/>
	I1006 14:41:07.189551  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <target dev='hdc' bus='scsi'/>
	I1006 14:41:07.189557  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <readonly/>
	I1006 14:41:07.189564  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1006 14:41:07.189571  773552 main.go:141] libmachine: (test-preload-907615) DBG |     </disk>
	I1006 14:41:07.189576  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <disk type='file' device='disk'>
	I1006 14:41:07.189582  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1006 14:41:07.189615  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/test-preload-907615.rawdisk'/>
	I1006 14:41:07.189627  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <target dev='hda' bus='virtio'/>
	I1006 14:41:07.189635  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1006 14:41:07.189639  773552 main.go:141] libmachine: (test-preload-907615) DBG |     </disk>
	I1006 14:41:07.189645  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1006 14:41:07.189660  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1006 14:41:07.189668  773552 main.go:141] libmachine: (test-preload-907615) DBG |     </controller>
	I1006 14:41:07.189673  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1006 14:41:07.189681  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1006 14:41:07.189687  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1006 14:41:07.189694  773552 main.go:141] libmachine: (test-preload-907615) DBG |     </controller>
	I1006 14:41:07.189699  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <interface type='network'>
	I1006 14:41:07.189734  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <mac address='52:54:00:2e:28:cf'/>
	I1006 14:41:07.189761  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <source network='mk-test-preload-907615'/>
	I1006 14:41:07.189776  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <model type='virtio'/>
	I1006 14:41:07.189790  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1006 14:41:07.189802  773552 main.go:141] libmachine: (test-preload-907615) DBG |     </interface>
	I1006 14:41:07.189813  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <interface type='network'>
	I1006 14:41:07.189823  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <mac address='52:54:00:22:ff:03'/>
	I1006 14:41:07.189835  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <source network='default'/>
	I1006 14:41:07.189851  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <model type='virtio'/>
	I1006 14:41:07.189874  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1006 14:41:07.189886  773552 main.go:141] libmachine: (test-preload-907615) DBG |     </interface>
	I1006 14:41:07.189894  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <serial type='pty'>
	I1006 14:41:07.189906  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <target type='isa-serial' port='0'>
	I1006 14:41:07.189924  773552 main.go:141] libmachine: (test-preload-907615) DBG |         <model name='isa-serial'/>
	I1006 14:41:07.189937  773552 main.go:141] libmachine: (test-preload-907615) DBG |       </target>
	I1006 14:41:07.189951  773552 main.go:141] libmachine: (test-preload-907615) DBG |     </serial>
	I1006 14:41:07.189963  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <console type='pty'>
	I1006 14:41:07.189970  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <target type='serial' port='0'/>
	I1006 14:41:07.189977  773552 main.go:141] libmachine: (test-preload-907615) DBG |     </console>
	I1006 14:41:07.189985  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <input type='mouse' bus='ps2'/>
	I1006 14:41:07.189994  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <input type='keyboard' bus='ps2'/>
	I1006 14:41:07.190002  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <audio id='1' type='none'/>
	I1006 14:41:07.190011  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <memballoon model='virtio'>
	I1006 14:41:07.190026  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1006 14:41:07.190039  773552 main.go:141] libmachine: (test-preload-907615) DBG |     </memballoon>
	I1006 14:41:07.190049  773552 main.go:141] libmachine: (test-preload-907615) DBG |     <rng model='virtio'>
	I1006 14:41:07.190059  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <backend model='random'>/dev/random</backend>
	I1006 14:41:07.190071  773552 main.go:141] libmachine: (test-preload-907615) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1006 14:41:07.190081  773552 main.go:141] libmachine: (test-preload-907615) DBG |     </rng>
	I1006 14:41:07.190086  773552 main.go:141] libmachine: (test-preload-907615) DBG |   </devices>
	I1006 14:41:07.190104  773552 main.go:141] libmachine: (test-preload-907615) DBG | </domain>
	I1006 14:41:07.190122  773552 main.go:141] libmachine: (test-preload-907615) DBG | 
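The XML dump above is the domain definition the kvm2 driver hands back to libvirt before calling Start. Below is a minimal sketch of that start-and-wait step, approximated with the virsh CLI via os/exec rather than the driver's actual libvirt API bindings; the domain name comes from the log, while the polling interval and retry budget are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// startDomain approximates the driver's Start call: ask libvirt to start
// the domain, then poll until it reports "running". The real driver talks
// to libvirt directly; virsh is used here only for illustration.
func startDomain(name string) error {
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start %s: %v: %s", name, err, out)
	}
	for i := 0; i < 30; i++ { // retry budget is an assumption
		out, err := exec.Command("virsh", "domstate", name).Output()
		if err == nil && strings.TrimSpace(string(out)) == "running" {
			return nil
		}
		time.Sleep(2 * time.Second) // polling interval is an assumption
	}
	return fmt.Errorf("domain %s did not reach running state", name)
}

func main() {
	if err := startDomain("test-preload-907615"); err != nil {
		fmt.Println(err)
	}
}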
	I1006 14:41:07.597955  773552 main.go:141] libmachine: (test-preload-907615) waiting for domain to start...
	I1006 14:41:07.599538  773552 main.go:141] libmachine: (test-preload-907615) domain is now running
	I1006 14:41:07.599559  773552 main.go:141] libmachine: (test-preload-907615) waiting for IP...
	I1006 14:41:07.600481  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:07.601140  773552 main.go:141] libmachine: (test-preload-907615) found domain IP: 192.168.39.101
	I1006 14:41:07.601170  773552 main.go:141] libmachine: (test-preload-907615) reserving static IP address...
	I1006 14:41:07.601208  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has current primary IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:07.601713  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "test-preload-907615", mac: "52:54:00:2e:28:cf", ip: "192.168.39.101"} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:39:55 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:07.601738  773552 main.go:141] libmachine: (test-preload-907615) DBG | skip adding static IP to network mk-test-preload-907615 - found existing host DHCP lease matching {name: "test-preload-907615", mac: "52:54:00:2e:28:cf", ip: "192.168.39.101"}
	I1006 14:41:07.601750  773552 main.go:141] libmachine: (test-preload-907615) reserved static IP address 192.168.39.101 for domain test-preload-907615
	I1006 14:41:07.601762  773552 main.go:141] libmachine: (test-preload-907615) waiting for SSH...
	I1006 14:41:07.601774  773552 main.go:141] libmachine: (test-preload-907615) DBG | Getting to WaitForSSH function...
	I1006 14:41:07.604124  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:07.604525  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:39:55 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:07.604570  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:07.604713  773552 main.go:141] libmachine: (test-preload-907615) DBG | Using SSH client type: external
	I1006 14:41:07.604765  773552 main.go:141] libmachine: (test-preload-907615) DBG | Using SSH private key: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa (-rw-------)
	I1006 14:41:07.604796  773552 main.go:141] libmachine: (test-preload-907615) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1006 14:41:07.604808  773552 main.go:141] libmachine: (test-preload-907615) DBG | About to run SSH command:
	I1006 14:41:07.604817  773552 main.go:141] libmachine: (test-preload-907615) DBG | exit 0
	I1006 14:41:18.900552  773552 main.go:141] libmachine: (test-preload-907615) DBG | SSH cmd err, output: exit status 255: 
	I1006 14:41:18.900597  773552 main.go:141] libmachine: (test-preload-907615) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1006 14:41:18.900610  773552 main.go:141] libmachine: (test-preload-907615) DBG | command : exit 0
	I1006 14:41:18.900617  773552 main.go:141] libmachine: (test-preload-907615) DBG | err     : exit status 255
	I1006 14:41:18.900629  773552 main.go:141] libmachine: (test-preload-907615) DBG | output  : 
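The exit status 255 above is the expected first-pass failure while the guest's sshd is still coming up; the driver simply retries. Here is a sketch of that wait loop, reusing the external-ssh flags shown verbatim in the log; the retry budget and the 3-second backoff are assumptions inferred from the timestamps.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `ssh ... "exit 0"` until the guest answers,
// mirroring the external-client invocation logged above.
func waitForSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	for attempt := 1; attempt <= 60; attempt++ { // retry budget is an assumption
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // matches the ~3s gap between attempts in the log
	}
	return fmt.Errorf("ssh to %s never became ready", ip)
}

func main() {
	err := waitForSSH("192.168.39.101",
		"/home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa")
	fmt.Println(err)
}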
	I1006 14:41:21.902758  773552 main.go:141] libmachine: (test-preload-907615) DBG | Getting to WaitForSSH function...
	I1006 14:41:21.906250  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:21.906810  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:21.906847  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:21.907029  773552 main.go:141] libmachine: (test-preload-907615) DBG | Using SSH client type: external
	I1006 14:41:21.907091  773552 main.go:141] libmachine: (test-preload-907615) DBG | Using SSH private key: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa (-rw-------)
	I1006 14:41:21.907130  773552 main.go:141] libmachine: (test-preload-907615) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1006 14:41:21.907143  773552 main.go:141] libmachine: (test-preload-907615) DBG | About to run SSH command:
	I1006 14:41:21.907188  773552 main.go:141] libmachine: (test-preload-907615) DBG | exit 0
	I1006 14:41:22.044841  773552 main.go:141] libmachine: (test-preload-907615) DBG | SSH cmd err, output: <nil>: 
	I1006 14:41:22.045275  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetConfigRaw
	I1006 14:41:22.046056  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetIP
	I1006 14:41:22.049008  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.049367  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:22.049399  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.049724  773552 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/config.json ...
	I1006 14:41:22.049942  773552 machine.go:93] provisionDockerMachine start ...
	I1006 14:41:22.049963  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	I1006 14:41:22.050224  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:22.053001  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.053417  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:22.053451  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.053649  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHPort
	I1006 14:41:22.053862  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:22.054049  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:22.054208  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHUsername
	I1006 14:41:22.054397  773552 main.go:141] libmachine: Using SSH client type: native
	I1006 14:41:22.054705  773552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1006 14:41:22.054718  773552 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:41:22.174204  773552 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1006 14:41:22.174236  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetMachineName
	I1006 14:41:22.174554  773552 buildroot.go:166] provisioning hostname "test-preload-907615"
	I1006 14:41:22.174597  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetMachineName
	I1006 14:41:22.174839  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:22.178251  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.178784  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:22.178810  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.179022  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHPort
	I1006 14:41:22.179233  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:22.179423  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:22.179564  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHUsername
	I1006 14:41:22.179745  773552 main.go:141] libmachine: Using SSH client type: native
	I1006 14:41:22.179971  773552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1006 14:41:22.179986  773552 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-907615 && echo "test-preload-907615" | sudo tee /etc/hostname
	I1006 14:41:22.317635  773552 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-907615
	
	I1006 14:41:22.317681  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:22.320802  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.321147  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:22.321178  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.321420  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHPort
	I1006 14:41:22.321652  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:22.321798  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:22.321940  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHUsername
	I1006 14:41:22.322111  773552 main.go:141] libmachine: Using SSH client type: native
	I1006 14:41:22.322317  773552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1006 14:41:22.322335  773552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-907615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-907615/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-907615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:41:22.451375  773552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:41:22.451407  773552 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21701-739942/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-739942/.minikube}
	I1006 14:41:22.451449  773552 buildroot.go:174] setting up certificates
	I1006 14:41:22.451460  773552 provision.go:84] configureAuth start
	I1006 14:41:22.451472  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetMachineName
	I1006 14:41:22.451835  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetIP
	I1006 14:41:22.455501  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.455968  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:22.456010  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.456176  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:22.458818  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.459218  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:22.459244  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.459438  773552 provision.go:143] copyHostCerts
	I1006 14:41:22.459533  773552 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem, removing ...
	I1006 14:41:22.459570  773552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem
	I1006 14:41:22.459683  773552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem (1078 bytes)
	I1006 14:41:22.459809  773552 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem, removing ...
	I1006 14:41:22.459822  773552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem
	I1006 14:41:22.459865  773552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem (1123 bytes)
	I1006 14:41:22.459945  773552 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem, removing ...
	I1006 14:41:22.459956  773552 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem
	I1006 14:41:22.459996  773552 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem (1679 bytes)
	I1006 14:41:22.460071  773552 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem org=jenkins.test-preload-907615 san=[127.0.0.1 192.168.39.101 localhost minikube test-preload-907615]
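The "generating server cert" line above mints a server certificate whose SANs cover the loopback address, the VM IP, and the machine hostnames. A minimal stdlib sketch of that step follows; a throwaway in-memory CA stands in for minikube's real ca.pem/ca-key.pem pair, and the validity window is an assumption (the log does not show it).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; the real flow signs with the existing minikube CA key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-907615"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// SANs copied from the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.101")},
		DNSNames:    []string{"localhost", "minikube", "test-preload-907615"},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(len(der), err)
}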
	I1006 14:41:22.739639  773552 provision.go:177] copyRemoteCerts
	I1006 14:41:22.739718  773552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:41:22.739829  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:22.743010  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.743341  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:22.743375  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.743532  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHPort
	I1006 14:41:22.743787  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:22.744054  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHUsername
	I1006 14:41:22.744223  773552 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa Username:docker}
	I1006 14:41:22.835254  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1006 14:41:22.867078  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1006 14:41:22.898648  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:41:22.931283  773552 provision.go:87] duration metric: took 479.804407ms to configureAuth
	I1006 14:41:22.931320  773552 buildroot.go:189] setting minikube options for container-runtime
	I1006 14:41:22.931525  773552 config.go:182] Loaded profile config "test-preload-907615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1006 14:41:22.931631  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:22.934667  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.935106  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:22.935137  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:22.935385  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHPort
	I1006 14:41:22.935567  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:22.935765  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:22.935959  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHUsername
	I1006 14:41:22.936172  773552 main.go:141] libmachine: Using SSH client type: native
	I1006 14:41:22.936379  773552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1006 14:41:22.936398  773552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:41:23.205488  773552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:41:23.205523  773552 machine.go:96] duration metric: took 1.15556589s to provisionDockerMachine
	I1006 14:41:23.205536  773552 start.go:293] postStartSetup for "test-preload-907615" (driver="kvm2")
	I1006 14:41:23.205550  773552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:41:23.205572  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	I1006 14:41:23.205985  773552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:41:23.206048  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:23.209230  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:23.209634  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:23.209664  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:23.209828  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHPort
	I1006 14:41:23.210042  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:23.210259  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHUsername
	I1006 14:41:23.210436  773552 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa Username:docker}
	I1006 14:41:23.302045  773552 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:41:23.307713  773552 info.go:137] Remote host: Buildroot 2025.02
	I1006 14:41:23.307745  773552 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/addons for local assets ...
	I1006 14:41:23.307839  773552 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/files for local assets ...
	I1006 14:41:23.307924  773552 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem -> 7438512.pem in /etc/ssl/certs
	I1006 14:41:23.308064  773552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:41:23.321354  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:41:23.355696  773552 start.go:296] duration metric: took 150.138206ms for postStartSetup
	I1006 14:41:23.355754  773552 fix.go:56] duration metric: took 16.189510513s for fixHost
	I1006 14:41:23.355783  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:23.358974  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:23.359529  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:23.359552  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:23.359851  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHPort
	I1006 14:41:23.360113  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:23.360383  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:23.360551  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHUsername
	I1006 14:41:23.360732  773552 main.go:141] libmachine: Using SSH client type: native
	I1006 14:41:23.360951  773552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1006 14:41:23.360965  773552 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1006 14:41:23.485036  773552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759761683.445883658
	
	I1006 14:41:23.485067  773552 fix.go:216] guest clock: 1759761683.445883658
	I1006 14:41:23.485076  773552 fix.go:229] Guest: 2025-10-06 14:41:23.445883658 +0000 UTC Remote: 2025-10-06 14:41:23.355760615 +0000 UTC m=+19.067575687 (delta=90.123043ms)
	I1006 14:41:23.485118  773552 fix.go:200] guest clock delta is within tolerance: 90.123043ms
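The clock check above runs `date +%s.%N` on the guest and compares it with the host clock; here the 90.123043ms drift passed. A small sketch of that comparison using the exact timestamps from the log; the 2-second tolerance is an assumption, since the log only shows that 90ms was accepted.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK parses the guest's `date +%s.%N` output and decides
// whether the drift from the host clock is within tolerance.
func clockDeltaOK(guestEpoch float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	sec := int64(guestEpoch)
	nsec := int64((guestEpoch - float64(sec)) * 1e9)
	guest := time.Unix(sec, nsec)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Unix(1759761683, 355760615) // the "Remote" timestamp from the log
	delta, ok := clockDeltaOK(1759761683.445883658, host, 2*time.Second)
	fmt.Println(delta, ok) // ~90.123043ms true (float64 rounding may shift a few hundred ns)
}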
	I1006 14:41:23.485124  773552 start.go:83] releasing machines lock for "test-preload-907615", held for 16.318893667s
	I1006 14:41:23.485143  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	I1006 14:41:23.485435  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetIP
	I1006 14:41:23.488408  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:23.488771  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:23.488799  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:23.489005  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	I1006 14:41:23.489506  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	I1006 14:41:23.489733  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	I1006 14:41:23.489818  773552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:41:23.489882  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:23.490026  773552 ssh_runner.go:195] Run: cat /version.json
	I1006 14:41:23.490073  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:23.493062  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:23.493182  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:23.493517  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:23.493542  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:23.493568  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:23.493598  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:23.493781  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHPort
	I1006 14:41:23.493943  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHPort
	I1006 14:41:23.494028  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:23.494107  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:23.494209  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHUsername
	I1006 14:41:23.494276  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHUsername
	I1006 14:41:23.494354  773552 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa Username:docker}
	I1006 14:41:23.494408  773552 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa Username:docker}
	I1006 14:41:23.579735  773552 ssh_runner.go:195] Run: systemctl --version
	I1006 14:41:23.609974  773552 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:41:23.757042  773552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:41:23.764968  773552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:41:23.765037  773552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:41:23.787454  773552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:41:23.787485  773552 start.go:495] detecting cgroup driver to use...
	I1006 14:41:23.787567  773552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:41:23.808254  773552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:41:23.826739  773552 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:41:23.826820  773552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:41:23.846860  773552 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:41:23.865424  773552 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:41:24.009271  773552 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:41:24.228876  773552 docker.go:234] disabling docker service ...
	I1006 14:41:24.228961  773552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:41:24.247171  773552 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:41:24.263859  773552 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:41:24.435523  773552 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:41:24.585179  773552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:41:24.602263  773552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:41:24.625578  773552 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1006 14:41:24.625659  773552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:41:24.638774  773552 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 14:41:24.638856  773552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:41:24.651929  773552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:41:24.665084  773552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:41:24.679194  773552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:41:24.696062  773552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:41:24.710139  773552 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:41:24.736763  773552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
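The run of `sudo sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A sketch of the same idempotent key rewrite in Go, under the assumption that each key appears at most once per line in the file:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// replaceLine rewrites any existing `key = ...` line to the new value,
// the same edit the sed commands above perform on 02-crio.conf.
func replaceLine(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Values and path taken from the log lines above.
	_ = replaceLine("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10")
	_ = replaceLine("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")
}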
	I1006 14:41:24.753121  773552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:41:24.764361  773552 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1006 14:41:24.764436  773552 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1006 14:41:24.786279  773552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
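The sysctl probe fails above because the br_netfilter module is not loaded yet (the /proc key does not exist until it is), so minikube falls back to modprobe and then enables IPv4 forwarding. A sketch of that fallback sequence, assuming the process runs with root privileges:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the fallback above: probe the bridge sysctl,
// load br_netfilter if the key is missing, then enable IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key absent until the module is loaded -- the same failure as in the log.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}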
	I1006 14:41:24.799271  773552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:41:24.940305  773552 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:41:25.052631  773552 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:41:25.052711  773552 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:41:25.058997  773552 start.go:563] Will wait 60s for crictl version
	I1006 14:41:25.059089  773552 ssh_runner.go:195] Run: which crictl
	I1006 14:41:25.063790  773552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 14:41:25.113472  773552 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1006 14:41:25.113549  773552 ssh_runner.go:195] Run: crio --version
	I1006 14:41:25.145717  773552 ssh_runner.go:195] Run: crio --version
	I1006 14:41:25.178512  773552 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1006 14:41:25.179949  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetIP
	I1006 14:41:25.183529  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:25.183891  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:25.183920  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:25.184168  773552 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1006 14:41:25.189194  773552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:41:25.205958  773552 kubeadm.go:883] updating cluster {Name:test-preload-907615 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-907615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:41:25.206075  773552 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1006 14:41:25.206135  773552 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:41:25.249512  773552 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1006 14:41:25.249621  773552 ssh_runner.go:195] Run: which lz4
	I1006 14:41:25.255081  773552 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1006 14:41:25.260540  773552 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1006 14:41:25.260577  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
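The stat above fails because the freshly restarted VM has no /preloaded.tar.lz4 yet, so the ~380 MB preload tarball is copied in. A rough sketch of that check-then-copy, using plain scp for illustration; the real ssh_runner streams over its existing SSH session and handles permissions itself, so the scp target path here is a simplification.

package main

import (
	"fmt"
	"os/exec"
)

// ensurePreload mirrors the stat-then-copy above: skip the transfer when
// /preloaded.tar.lz4 already exists on the guest, otherwise copy it in.
func ensurePreload(host, key, local string) error {
	stat := exec.Command("ssh", "-i", key, host, `stat -c "%s %y" /preloaded.tar.lz4`)
	if stat.Run() == nil {
		return nil // already present, nothing to copy
	}
	cp := exec.Command("scp", "-i", key, local, host+":/preloaded.tar.lz4")
	if out, err := cp.CombinedOutput(); err != nil {
		return fmt.Errorf("scp preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(ensurePreload("docker@192.168.39.101",
		"/home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa",
		"/home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4"))
}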
	I1006 14:41:26.943948  773552 crio.go:462] duration metric: took 1.688921443s to copy over tarball
	I1006 14:41:26.944027  773552 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1006 14:41:28.715177  773552 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.771111723s)
	I1006 14:41:28.715215  773552 crio.go:469] duration metric: took 1.771232966s to extract the tarball
	I1006 14:41:28.715223  773552 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1006 14:41:28.757828  773552 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:41:28.809331  773552 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:41:28.809367  773552 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:41:28.809376  773552 kubeadm.go:934] updating node { 192.168.39.101 8443 v1.32.0 crio true true} ...
	I1006 14:41:28.809496  773552 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-907615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-907615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:41:28.809563  773552 ssh_runner.go:195] Run: crio config
	I1006 14:41:28.858568  773552 cni.go:84] Creating CNI manager for ""
	I1006 14:41:28.858603  773552 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:41:28.858722  773552 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:41:28.858753  773552 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.101 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-907615 NodeName:test-preload-907615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:41:28.858904  773552 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-907615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.101"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.101"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:41:28.858982  773552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1006 14:41:28.872334  773552 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:41:28.872419  773552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:41:28.884259  773552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1006 14:41:28.905520  773552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:41:28.926695  773552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
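The three "scp memory -->" lines above write in-memory buffers (the kubelet drop-in, the kubelet service unit, and kubeadm.yaml.new) straight to guest paths. A sketch of one plausible way to do that, piping the bytes through ssh into sudo tee; this is an approximation of the runner's behavior, not its actual implementation.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// copyBytes approximates the "scp memory --> path" runner calls above:
// stream an in-memory buffer over ssh and land it with sudo tee.
func copyBytes(host, key, remotePath string, data []byte) error {
	cmd := exec.Command("ssh", "-i", key, host, "sudo tee "+remotePath+" >/dev/null")
	cmd.Stdin = bytes.NewReader(data)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("write %s: %v: %s", remotePath, err, out)
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nWants=crio.service\n") // truncated example payload
	fmt.Println(copyBytes("docker@192.168.39.101",
		"/home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa",
		"/lib/systemd/system/kubelet.service", unit))
}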
	I1006 14:41:28.949146  773552 ssh_runner.go:195] Run: grep 192.168.39.101	control-plane.minikube.internal$ /etc/hosts
	I1006 14:41:28.953804  773552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:41:28.969703  773552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:41:29.118574  773552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:41:29.152075  773552 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615 for IP: 192.168.39.101
	I1006 14:41:29.152103  773552 certs.go:195] generating shared ca certs ...
	I1006 14:41:29.152121  773552 certs.go:227] acquiring lock for ca certs: {Name:mkac26b60e1fd10143a5d4dc5ca0de64e9dd4f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:41:29.152281  773552 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key
	I1006 14:41:29.152321  773552 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key
	I1006 14:41:29.152331  773552 certs.go:257] generating profile certs ...
	I1006 14:41:29.152414  773552 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/client.key
	I1006 14:41:29.152458  773552 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/apiserver.key.4183cdd8
	I1006 14:41:29.152494  773552 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/proxy-client.key
	I1006 14:41:29.152611  773552 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem (1338 bytes)
	W1006 14:41:29.152641  773552 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851_empty.pem, impossibly tiny 0 bytes
	I1006 14:41:29.152650  773552 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem (1679 bytes)
	I1006 14:41:29.152672  773552 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem (1078 bytes)
	I1006 14:41:29.152694  773552 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:41:29.152714  773552 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem (1679 bytes)
	I1006 14:41:29.152767  773552 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:41:29.153346  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:41:29.200483  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 14:41:29.243702  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:41:29.275845  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:41:29.308658  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1006 14:41:29.340185  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:41:29.373032  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:41:29.405448  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:41:29.438465  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /usr/share/ca-certificates/7438512.pem (1708 bytes)
	I1006 14:41:29.470697  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:41:29.503152  773552 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem --> /usr/share/ca-certificates/743851.pem (1338 bytes)
	I1006 14:41:29.535233  773552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:41:29.557627  773552 ssh_runner.go:195] Run: openssl version
	I1006 14:41:29.564620  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:41:29.578629  773552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:41:29.583999  773552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:50 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:41:29.584066  773552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:41:29.591771  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:41:29.605547  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/743851.pem && ln -fs /usr/share/ca-certificates/743851.pem /etc/ssl/certs/743851.pem"
	I1006 14:41:29.619564  773552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/743851.pem
	I1006 14:41:29.625283  773552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 13:59 /usr/share/ca-certificates/743851.pem
	I1006 14:41:29.625345  773552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/743851.pem
	I1006 14:41:29.633070  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/743851.pem /etc/ssl/certs/51391683.0"
	I1006 14:41:29.647453  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7438512.pem && ln -fs /usr/share/ca-certificates/7438512.pem /etc/ssl/certs/7438512.pem"
	I1006 14:41:29.662215  773552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7438512.pem
	I1006 14:41:29.668369  773552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 13:59 /usr/share/ca-certificates/7438512.pem
	I1006 14:41:29.668436  773552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7438512.pem
	I1006 14:41:29.676455  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7438512.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:41:29.690999  773552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:41:29.696883  773552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:41:29.704923  773552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:41:29.712780  773552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:41:29.721340  773552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:41:29.729670  773552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:41:29.737772  773552 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
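Each of the openssl x509 -checkend 86400 runs above asks a single question: will this certificate still be valid 86400 seconds (24 hours) from now? The same check in plain Go, a sketch using only the standard library and the first certificate path from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; any PEM-encoded certificate works.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: compare NotAfter against
	// now + 24h.
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}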
	I1006 14:41:29.745705  773552 kubeadm.go:400] StartCluster: {Name:test-preload-907615 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-907615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:41:29.745828  773552 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:41:29.745894  773552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:41:29.787795  773552 cri.go:89] found id: ""
	I1006 14:41:29.787890  773552 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:41:29.801150  773552 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:41:29.801180  773552 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:41:29.801243  773552 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:41:29.814097  773552 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:41:29.814549  773552 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-907615" does not appear in /home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 14:41:29.814703  773552 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-739942/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-907615" cluster setting kubeconfig missing "test-preload-907615" context setting]
	I1006 14:41:29.815010  773552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/kubeconfig: {Name:mkb3c6455f820b9fd25629981fabc6cb3d63fb4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:41:29.815512  773552 kapi.go:59] client config for test-preload-907615: &rest.Config{Host:"https://192.168.39.101:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/client.key", CAFile:"/home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
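The rest.Config dumped above is a mutual-TLS client configuration: the API server endpoint plus the profile's client certificate, client key, and cluster CA. A sketch of building an equivalent config by hand and issuing one request with it, assuming k8s.io/client-go and the paths shown in the dump:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Paths taken from the client config dump above.
	base := "/home/jenkins/minikube-integration/21701-739942/.minikube"
	cfg := &rest.Config{
		Host: "https://192.168.39.101:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: base + "/profiles/test-preload-907615/client.crt",
			KeyFile:  base + "/profiles/test-preload-907615/client.key",
			CAFile:   base + "/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// One authenticated request to prove the config works.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cluster reports %d node(s)\n", len(nodes.Items))
}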
	I1006 14:41:29.815923  773552 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 14:41:29.815944  773552 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 14:41:29.815949  773552 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 14:41:29.815953  773552 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 14:41:29.815957  773552 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 14:41:29.816333  773552 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:41:29.828625  773552 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.101
	I1006 14:41:29.828668  773552 kubeadm.go:1160] stopping kube-system containers ...
	I1006 14:41:29.828683  773552 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1006 14:41:29.828747  773552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:41:29.885677  773552 cri.go:89] found id: ""
	I1006 14:41:29.885771  773552 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 14:41:29.919674  773552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:41:29.932854  773552 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:41:29.932901  773552 kubeadm.go:157] found existing configuration files:
	
	I1006 14:41:29.932960  773552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:41:29.945453  773552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:41:29.945536  773552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:41:29.958533  773552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:41:29.970882  773552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:41:29.970949  773552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:41:29.983758  773552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:41:29.996206  773552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:41:29.996285  773552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:41:30.009974  773552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:41:30.022411  773552 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:41:30.022494  773552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:41:30.035298  773552 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:41:30.048891  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:41:30.111339  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:41:31.338877  773552 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.227493226s)
	I1006 14:41:31.338951  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:41:31.580726  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:41:31.650616  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:41:31.731786  773552 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:41:31.731871  773552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:41:32.232166  773552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:41:32.732373  773552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:41:33.232407  773552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:41:33.264915  773552 api_server.go:72] duration metric: took 1.533140989s to wait for apiserver process to appear ...
	I1006 14:41:33.264946  773552 api_server.go:88] waiting for apiserver healthz status ...
	I1006 14:41:33.264969  773552 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1006 14:41:35.274016  773552 api_server.go:279] https://192.168.39.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1006 14:41:35.274069  773552 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 14:41:35.274083  773552 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1006 14:41:35.306174  773552 api_server.go:279] https://192.168.39.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1006 14:41:35.306208  773552 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 14:41:35.765653  773552 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1006 14:41:35.779440  773552 api_server.go:279] https://192.168.39.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 14:41:35.779541  773552 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 14:41:36.265221  773552 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1006 14:41:36.274055  773552 api_server.go:279] https://192.168.39.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 14:41:36.274092  773552 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 14:41:36.765832  773552 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1006 14:41:36.772142  773552 api_server.go:279] https://192.168.39.101:8443/healthz returned 200:
	ok
	I1006 14:41:36.781811  773552 api_server.go:141] control plane version: v1.32.0
	I1006 14:41:36.781870  773552 api_server.go:131] duration metric: took 3.516898817s to wait for apiserver health ...
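The healthz probes above move through three phases: 403 while anonymous requests are still rejected (the RBAC bootstrap roles that open /healthz to unauthenticated clients do not exist yet), 500 while post-start hooks such as rbac/bootstrap-roles are pending, and finally 200. A minimal poll loop in the same spirit, not minikube's implementation; TLS verification is skipped here purely to keep the sketch short:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// For brevity only; a real client would trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.101:8443/healthz")
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			// 403 and 500 are expected transients during startup.
			fmt.Printf("healthz returned %d, retrying\n", status)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}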
	I1006 14:41:36.781886  773552 cni.go:84] Creating CNI manager for ""
	I1006 14:41:36.781895  773552 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:41:36.783398  773552 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1006 14:41:36.784721  773552 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1006 14:41:36.798906  773552 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1006 14:41:36.822657  773552 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 14:41:36.828087  773552 system_pods.go:59] 7 kube-system pods found
	I1006 14:41:36.828124  773552 system_pods.go:61] "coredns-668d6bf9bc-qk97j" [a3caad5a-3054-4d06-a1dc-3cd0337df5dd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:41:36.828132  773552 system_pods.go:61] "etcd-test-preload-907615" [6d516f18-6dd9-4179-945c-0b99a8eeb909] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 14:41:36.828140  773552 system_pods.go:61] "kube-apiserver-test-preload-907615" [fae98a7e-e338-4d65-b736-4eb5ded61e12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 14:41:36.828146  773552 system_pods.go:61] "kube-controller-manager-test-preload-907615" [028ee40d-9338-4667-8cb7-f1727c37d72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 14:41:36.828152  773552 system_pods.go:61] "kube-proxy-pvdrb" [aee33994-c241-4c30-b74c-fef0d4607229] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1006 14:41:36.828160  773552 system_pods.go:61] "kube-scheduler-test-preload-907615" [39c9617c-5994-4a5f-969d-9c1e913ffb28] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 14:41:36.828168  773552 system_pods.go:61] "storage-provisioner" [132c360c-c1cb-4550-963a-4047e964343e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 14:41:36.828176  773552 system_pods.go:74] duration metric: took 5.492536ms to wait for pod list to return data ...
	I1006 14:41:36.828188  773552 node_conditions.go:102] verifying NodePressure condition ...
	I1006 14:41:36.831898  773552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1006 14:41:36.831925  773552 node_conditions.go:123] node cpu capacity is 2
	I1006 14:41:36.831937  773552 node_conditions.go:105] duration metric: took 3.744284ms to run NodePressure ...
	I1006 14:41:36.831996  773552 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:41:37.156898  773552 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1006 14:41:37.160918  773552 kubeadm.go:743] kubelet initialised
	I1006 14:41:37.160942  773552 kubeadm.go:744] duration metric: took 4.017998ms waiting for restarted kubelet to initialise ...
	I1006 14:41:37.160960  773552 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 14:41:37.178413  773552 ops.go:34] apiserver oom_adj: -16
	I1006 14:41:37.178442  773552 kubeadm.go:601] duration metric: took 7.377253521s to restartPrimaryControlPlane
	I1006 14:41:37.178452  773552 kubeadm.go:402] duration metric: took 7.432760312s to StartCluster
	I1006 14:41:37.178477  773552 settings.go:142] acquiring lock: {Name:mk95ac14a932277c5d6f71123bdccb175d870212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:41:37.178550  773552 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 14:41:37.179153  773552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/kubeconfig: {Name:mkb3c6455f820b9fd25629981fabc6cb3d63fb4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:41:37.179408  773552 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:41:37.179641  773552 config.go:182] Loaded profile config "test-preload-907615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1006 14:41:37.179566  773552 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 14:41:37.179715  773552 addons.go:69] Setting storage-provisioner=true in profile "test-preload-907615"
	I1006 14:41:37.179723  773552 addons.go:69] Setting default-storageclass=true in profile "test-preload-907615"
	I1006 14:41:37.179740  773552 addons.go:238] Setting addon storage-provisioner=true in "test-preload-907615"
	I1006 14:41:37.179740  773552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-907615"
	W1006 14:41:37.179749  773552 addons.go:247] addon storage-provisioner should already be in state true
	I1006 14:41:37.179788  773552 host.go:66] Checking if "test-preload-907615" exists ...
	I1006 14:41:37.180190  773552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:41:37.180190  773552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:41:37.180238  773552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:41:37.180256  773552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:41:37.181258  773552 out.go:179] * Verifying Kubernetes components...
	I1006 14:41:37.182782  773552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:41:37.195518  773552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I1006 14:41:37.196356  773552 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:41:37.197103  773552 main.go:141] libmachine: Using API Version  1
	I1006 14:41:37.197137  773552 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:41:37.197577  773552 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:41:37.198221  773552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:41:37.198280  773552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:41:37.199659  773552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I1006 14:41:37.200293  773552 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:41:37.200889  773552 main.go:141] libmachine: Using API Version  1
	I1006 14:41:37.200913  773552 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:41:37.201374  773552 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:41:37.201610  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetState
	I1006 14:41:37.204124  773552 kapi.go:59] client config for test-preload-907615: &rest.Config{Host:"https://192.168.39.101:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/client.key", CAFile:"/home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:41:37.204461  773552 addons.go:238] Setting addon default-storageclass=true in "test-preload-907615"
	W1006 14:41:37.204482  773552 addons.go:247] addon default-storageclass should already be in state true
	I1006 14:41:37.204508  773552 host.go:66] Checking if "test-preload-907615" exists ...
	I1006 14:41:37.204842  773552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:41:37.204876  773552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:41:37.213770  773552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38119
	I1006 14:41:37.214306  773552 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:41:37.214836  773552 main.go:141] libmachine: Using API Version  1
	I1006 14:41:37.214868  773552 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:41:37.215329  773552 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:41:37.215572  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetState
	I1006 14:41:37.217619  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	I1006 14:41:37.219137  773552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I1006 14:41:37.219631  773552 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:41:37.220290  773552 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:41:37.220421  773552 main.go:141] libmachine: Using API Version  1
	I1006 14:41:37.220457  773552 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:41:37.220854  773552 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:41:37.221317  773552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:41:37.221365  773552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:41:37.221900  773552 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:41:37.221920  773552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:41:37.221942  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:37.226082  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:37.226652  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:37.226689  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:37.226901  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHPort
	I1006 14:41:37.227124  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:37.227364  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHUsername
	I1006 14:41:37.227538  773552 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa Username:docker}
	I1006 14:41:37.237438  773552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I1006 14:41:37.237928  773552 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:41:37.238453  773552 main.go:141] libmachine: Using API Version  1
	I1006 14:41:37.238483  773552 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:41:37.238887  773552 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:41:37.239104  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetState
	I1006 14:41:37.241188  773552 main.go:141] libmachine: (test-preload-907615) Calling .DriverName
	I1006 14:41:37.241561  773552 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:41:37.241577  773552 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:41:37.241610  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHHostname
	I1006 14:41:37.245110  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:37.245646  773552 main.go:141] libmachine: (test-preload-907615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:28:cf", ip: ""} in network mk-test-preload-907615: {Iface:virbr1 ExpiryTime:2025-10-06 15:41:18 +0000 UTC Type:0 Mac:52:54:00:2e:28:cf Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-907615 Clientid:01:52:54:00:2e:28:cf}
	I1006 14:41:37.245683  773552 main.go:141] libmachine: (test-preload-907615) DBG | domain test-preload-907615 has defined IP address 192.168.39.101 and MAC address 52:54:00:2e:28:cf in network mk-test-preload-907615
	I1006 14:41:37.245899  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHPort
	I1006 14:41:37.246151  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHKeyPath
	I1006 14:41:37.246339  773552 main.go:141] libmachine: (test-preload-907615) Calling .GetSSHUsername
	I1006 14:41:37.246497  773552 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/test-preload-907615/id_rsa Username:docker}
	I1006 14:41:37.425043  773552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:41:37.455783  773552 node_ready.go:35] waiting up to 6m0s for node "test-preload-907615" to be "Ready" ...
	I1006 14:41:37.524175  773552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:41:37.526219  773552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:41:37.737658  773552 main.go:141] libmachine: Making call to close driver server
	I1006 14:41:37.737689  773552 main.go:141] libmachine: (test-preload-907615) Calling .Close
	I1006 14:41:37.738024  773552 main.go:141] libmachine: (test-preload-907615) DBG | Closing plugin on server side
	I1006 14:41:37.738089  773552 main.go:141] libmachine: Successfully made call to close driver server
	I1006 14:41:37.738116  773552 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 14:41:37.738131  773552 main.go:141] libmachine: Making call to close driver server
	I1006 14:41:37.738143  773552 main.go:141] libmachine: (test-preload-907615) Calling .Close
	I1006 14:41:37.738374  773552 main.go:141] libmachine: Successfully made call to close driver server
	I1006 14:41:37.738388  773552 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 14:41:37.738418  773552 main.go:141] libmachine: (test-preload-907615) DBG | Closing plugin on server side
	I1006 14:41:37.745552  773552 main.go:141] libmachine: Making call to close driver server
	I1006 14:41:37.745569  773552 main.go:141] libmachine: (test-preload-907615) Calling .Close
	I1006 14:41:37.745889  773552 main.go:141] libmachine: Successfully made call to close driver server
	I1006 14:41:37.745923  773552 main.go:141] libmachine: (test-preload-907615) DBG | Closing plugin on server side
	I1006 14:41:37.745938  773552 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 14:41:38.251874  773552 main.go:141] libmachine: Making call to close driver server
	I1006 14:41:38.251905  773552 main.go:141] libmachine: (test-preload-907615) Calling .Close
	I1006 14:41:38.252232  773552 main.go:141] libmachine: Successfully made call to close driver server
	I1006 14:41:38.252261  773552 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 14:41:38.252261  773552 main.go:141] libmachine: (test-preload-907615) DBG | Closing plugin on server side
	I1006 14:41:38.252273  773552 main.go:141] libmachine: Making call to close driver server
	I1006 14:41:38.252282  773552 main.go:141] libmachine: (test-preload-907615) Calling .Close
	I1006 14:41:38.252557  773552 main.go:141] libmachine: Successfully made call to close driver server
	I1006 14:41:38.252577  773552 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 14:41:38.255307  773552 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1006 14:41:38.256519  773552 addons.go:514] duration metric: took 1.076966235s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1006 14:41:39.459396  773552 node_ready.go:57] node "test-preload-907615" has "Ready":"False" status (will retry)
	W1006 14:41:41.459722  773552 node_ready.go:57] node "test-preload-907615" has "Ready":"False" status (will retry)
	W1006 14:41:43.460798  773552 node_ready.go:57] node "test-preload-907615" has "Ready":"False" status (will retry)
	I1006 14:41:45.959835  773552 node_ready.go:49] node "test-preload-907615" is "Ready"
	I1006 14:41:45.959874  773552 node_ready.go:38] duration metric: took 8.504045398s for node "test-preload-907615" to be "Ready" ...
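The node_ready wait above amounts to watching for the node's NodeReady condition to become True. A sketch of reading that condition directly, assuming k8s.io/client-go and the kubeconfig path that appears earlier in the log:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21701-739942/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-907615", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// The wait above succeeds once this condition reports True.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("NodeReady=%s (%s)\n", c.Status, c.Reason)
		}
	}
}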
	I1006 14:41:45.959894  773552 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:41:45.959948  773552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:41:45.982114  773552 api_server.go:72] duration metric: took 8.802672593s to wait for apiserver process to appear ...
	I1006 14:41:45.982143  773552 api_server.go:88] waiting for apiserver healthz status ...
	I1006 14:41:45.982165  773552 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1006 14:41:45.986897  773552 api_server.go:279] https://192.168.39.101:8443/healthz returned 200:
	ok
	I1006 14:41:45.987992  773552 api_server.go:141] control plane version: v1.32.0
	I1006 14:41:45.988019  773552 api_server.go:131] duration metric: took 5.86862ms to wait for apiserver health ...
	I1006 14:41:45.988028  773552 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 14:41:45.992368  773552 system_pods.go:59] 7 kube-system pods found
	I1006 14:41:45.992402  773552 system_pods.go:61] "coredns-668d6bf9bc-qk97j" [a3caad5a-3054-4d06-a1dc-3cd0337df5dd] Running
	I1006 14:41:45.992410  773552 system_pods.go:61] "etcd-test-preload-907615" [6d516f18-6dd9-4179-945c-0b99a8eeb909] Running
	I1006 14:41:45.992416  773552 system_pods.go:61] "kube-apiserver-test-preload-907615" [fae98a7e-e338-4d65-b736-4eb5ded61e12] Running
	I1006 14:41:45.992422  773552 system_pods.go:61] "kube-controller-manager-test-preload-907615" [028ee40d-9338-4667-8cb7-f1727c37d72b] Running
	I1006 14:41:45.992428  773552 system_pods.go:61] "kube-proxy-pvdrb" [aee33994-c241-4c30-b74c-fef0d4607229] Running
	I1006 14:41:45.992442  773552 system_pods.go:61] "kube-scheduler-test-preload-907615" [39c9617c-5994-4a5f-969d-9c1e913ffb28] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 14:41:45.992454  773552 system_pods.go:61] "storage-provisioner" [132c360c-c1cb-4550-963a-4047e964343e] Running
	I1006 14:41:45.992464  773552 system_pods.go:74] duration metric: took 4.4301ms to wait for pod list to return data ...
	I1006 14:41:45.992476  773552 default_sa.go:34] waiting for default service account to be created ...
	I1006 14:41:45.996146  773552 default_sa.go:45] found service account: "default"
	I1006 14:41:45.996177  773552 default_sa.go:55] duration metric: took 3.687822ms for default service account to be created ...
	I1006 14:41:45.996190  773552 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 14:41:46.000314  773552 system_pods.go:86] 7 kube-system pods found
	I1006 14:41:46.000342  773552 system_pods.go:89] "coredns-668d6bf9bc-qk97j" [a3caad5a-3054-4d06-a1dc-3cd0337df5dd] Running
	I1006 14:41:46.000347  773552 system_pods.go:89] "etcd-test-preload-907615" [6d516f18-6dd9-4179-945c-0b99a8eeb909] Running
	I1006 14:41:46.000351  773552 system_pods.go:89] "kube-apiserver-test-preload-907615" [fae98a7e-e338-4d65-b736-4eb5ded61e12] Running
	I1006 14:41:46.000355  773552 system_pods.go:89] "kube-controller-manager-test-preload-907615" [028ee40d-9338-4667-8cb7-f1727c37d72b] Running
	I1006 14:41:46.000358  773552 system_pods.go:89] "kube-proxy-pvdrb" [aee33994-c241-4c30-b74c-fef0d4607229] Running
	I1006 14:41:46.000372  773552 system_pods.go:89] "kube-scheduler-test-preload-907615" [39c9617c-5994-4a5f-969d-9c1e913ffb28] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 14:41:46.000378  773552 system_pods.go:89] "storage-provisioner" [132c360c-c1cb-4550-963a-4047e964343e] Running
	I1006 14:41:46.000388  773552 system_pods.go:126] duration metric: took 4.192112ms to wait for k8s-apps to be running ...
	I1006 14:41:46.000398  773552 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 14:41:46.000442  773552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:41:46.019126  773552 system_svc.go:56] duration metric: took 18.715943ms WaitForService to wait for kubelet
	I1006 14:41:46.019157  773552 kubeadm.go:586] duration metric: took 8.839721097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:41:46.019181  773552 node_conditions.go:102] verifying NodePressure condition ...
	I1006 14:41:46.023137  773552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1006 14:41:46.023167  773552 node_conditions.go:123] node cpu capacity is 2
	I1006 14:41:46.023182  773552 node_conditions.go:105] duration metric: took 3.996385ms to run NodePressure ...
	I1006 14:41:46.023198  773552 start.go:241] waiting for startup goroutines ...
	I1006 14:41:46.023209  773552 start.go:246] waiting for cluster config update ...
	I1006 14:41:46.023224  773552 start.go:255] writing updated cluster config ...
	I1006 14:41:46.023595  773552 ssh_runner.go:195] Run: rm -f paused
	I1006 14:41:46.029665  773552 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:41:46.030207  773552 kapi.go:59] client config for test-preload-907615: &rest.Config{Host:"https://192.168.39.101:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-739942/.minikube/profiles/test-preload-907615/client.key", CAFile:"/home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:41:46.033343  773552 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-qk97j" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:41:46.038437  773552 pod_ready.go:94] pod "coredns-668d6bf9bc-qk97j" is "Ready"
	I1006 14:41:46.038468  773552 pod_ready.go:86] duration metric: took 5.103874ms for pod "coredns-668d6bf9bc-qk97j" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:41:46.041372  773552 pod_ready.go:83] waiting for pod "etcd-test-preload-907615" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:41:46.045285  773552 pod_ready.go:94] pod "etcd-test-preload-907615" is "Ready"
	I1006 14:41:46.045309  773552 pod_ready.go:86] duration metric: took 3.910748ms for pod "etcd-test-preload-907615" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:41:46.047504  773552 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-907615" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:41:46.051209  773552 pod_ready.go:94] pod "kube-apiserver-test-preload-907615" is "Ready"
	I1006 14:41:46.051233  773552 pod_ready.go:86] duration metric: took 3.706712ms for pod "kube-apiserver-test-preload-907615" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:41:46.053521  773552 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-907615" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:41:46.434474  773552 pod_ready.go:94] pod "kube-controller-manager-test-preload-907615" is "Ready"
	I1006 14:41:46.434501  773552 pod_ready.go:86] duration metric: took 380.958462ms for pod "kube-controller-manager-test-preload-907615" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:41:46.634462  773552 pod_ready.go:83] waiting for pod "kube-proxy-pvdrb" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:41:47.035982  773552 pod_ready.go:94] pod "kube-proxy-pvdrb" is "Ready"
	I1006 14:41:47.036013  773552 pod_ready.go:86] duration metric: took 401.52221ms for pod "kube-proxy-pvdrb" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:41:47.234478  773552 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-907615" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 14:41:49.241524  773552 pod_ready.go:104] pod "kube-scheduler-test-preload-907615" is not "Ready", error: <nil>
	W1006 14:41:51.740565  773552 pod_ready.go:104] pod "kube-scheduler-test-preload-907615" is not "Ready", error: <nil>
	I1006 14:41:53.241100  773552 pod_ready.go:94] pod "kube-scheduler-test-preload-907615" is "Ready"
	I1006 14:41:53.241136  773552 pod_ready.go:86] duration metric: took 6.006623225s for pod "kube-scheduler-test-preload-907615" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:41:53.241152  773552 pod_ready.go:40] duration metric: took 7.211441761s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:41:53.286258  773552 start.go:623] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1006 14:41:53.288055  773552 out.go:203] 
	W1006 14:41:53.289478  773552 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1006 14:41:53.290816  773552 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1006 14:41:53.293151  773552 out.go:179] * Done! kubectl is now configured to use "test-preload-907615" cluster and "default" namespace by default
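	
	The pod_ready.go lines above poll each kube-system component by label until its PodReady condition is True (or the pod is gone). A minimal client-go sketch of such a wait loop follows; it is illustrative only, not minikube's actual implementation, and the kubeconfig path, 2s interval, and 6-minute timeout are assumptions:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func podReady(p corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// One of the label selectors listed in the log above.
		sel := "component=kube-scheduler"
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // retry: transient API error or pod not created yet
				}
				for _, p := range pods.Items {
					if !podReady(p) {
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Printf("all %q pods Ready\n", sel)
	}
	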
	
	
	==> CRI-O <==
	Oct 06 14:41:54 test-preload-907615 crio[826]: time="2025-10-06 14:41:54.244707799Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759761714244677000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2e45921-6b16-4001-a6e4-d70ae36afb80 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 14:41:54 test-preload-907615 crio[826]: time="2025-10-06 14:41:54.245656880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdc3e6e4-e5f0-4629-b12e-2867e2fabaa4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:41:54 test-preload-907615 crio[826]: time="2025-10-06 14:41:54.245787238Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdc3e6e4-e5f0-4629-b12e-2867e2fabaa4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:41:54 test-preload-907615 crio[826]: time="2025-10-06 14:41:54.246151315Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:831e2eaa05fb7244eac0df9b4e13ccf72c341136daab5fd9bfff8c525e07072a,PodSandboxId:d68237e5a2ca8e918f624d4cc04a6590e94dadcf7b5eeb93548e131d191dad0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759761703606888149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qk97j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3caad5a-3054-4d06-a1dc-3cd0337df5dd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc914a4cdbdbd66a0fd410677e47cfec9861a542e170e020cb580fb61bdfc7d,PodSandboxId:8a5de607dfc13fb51b1f3ff8f5eaccc4432afd7bdbe32904fc89ed3386f2109b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759761696911357634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 132c360c-c1cb-4550-963a-4047e964343e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360a8da685241ad0b4d1fc0fc0dd07bb599a9c7b46933bd43e827e35a6169769,PodSandboxId:70454e613de5e906f9c419d86658b55c2c8f11c4ee7d4ae26116dc37f140c87d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759761696131056424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pvdrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae
e33994-c241-4c30-b74c-fef0d4607229,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea724e86d84ded7e67c86c0a052b7aec4b4e6dcb67189a9d55e981e1cfbdb6c0,PodSandboxId:8a5de607dfc13fb51b1f3ff8f5eaccc4432afd7bdbe32904fc89ed3386f2109b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759761696138037501,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132c360c-c1cb-4
550-963a-4047e964343e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4fd7f418819c6035b5da587c5a68b15a8c25bbbfc81ace95f68fb4becbc15c1,PodSandboxId:af9a3bfaebd506e32476e22f6828695b9e150532b1a4326241b506541e220b29,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759761692792406666,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-907615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1e27462e391361a079b83e3bd7af83,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ff36e8cc82ab77145491dc552cd7b3442715158081fe12c2e52b3392623857,PodSandboxId:5fb8422addb28c913566b07c823eab76c7129bb88879b5167fe164c145aecab1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759761692836231779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-907615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe1afbb6ff2226ded232f0e45becfa4,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6afbb5498fdebabd8e1c03421ca06151a34bdd36de55afe9e630e3cb0e0239c4,PodSandboxId:561abcfe5a6bc19ae5cf64536ce3f129729f14b26e5535e2438eb45fd3f4fda6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759761692791124118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-907615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f4ccafc1016fe416e43d8699ebedff,},Annotations:map[string]string{io.kubern
etes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d3fa7416ce56fae288562b4370457b326f61e9e64018df2294c0800f70c419b,PodSandboxId:5a98f393c6bffb25979f185599b8b7ede8ec94809a5eb9200a3611c548360d15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759761692709891549,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-907615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b0fe362a8579b7741aeba6e77b0151,},Annotations:map[string]
string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdc3e6e4-e5f0-4629-b12e-2867e2fabaa4 name=/runtime.v1.RuntimeService/ListContainers
	[the Version / ImageFsInfo / ListContainers debug cycle above repeats three more times between 14:41:54.289 and 14:41:54.377 (request ids d09ec79f…, a1c4e232…, f9826f71… and their companions), returning an identical container list each time]
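	
	Each CRI-O debug cycle above is a client (most likely the kubelet or the log collector) polling the standard CRI endpoints: Version, ImageFsInfo, and ListContainers with an empty filter. A hedged Go sketch that issues the same two RuntimeService calls against the crio socket (socket path taken from the node's cri-socket annotation further down; error handling trimmed):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Same unix socket CRI-O serves in this run.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		rt := runtimeapi.NewRuntimeServiceClient(conn)
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // "cri-o 1.29.1" in this run
	
		// An empty filter triggers the "No filters were applied" debug line above.
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}
	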
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	831e2eaa05fb7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   10 seconds ago      Running             coredns                   1                   d68237e5a2ca8       coredns-668d6bf9bc-qk97j
	5bc914a4cdbdb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Running             storage-provisioner       3                   8a5de607dfc13       storage-provisioner
	ea724e86d84de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 seconds ago      Exited              storage-provisioner       2                   8a5de607dfc13       storage-provisioner
	360a8da685241       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   18 seconds ago      Running             kube-proxy                1                   70454e613de5e       kube-proxy-pvdrb
	f9ff36e8cc82a       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   21 seconds ago      Running             kube-scheduler            1                   5fb8422addb28       kube-scheduler-test-preload-907615
	e4fd7f418819c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago      Running             etcd                      1                   af9a3bfaebd50       etcd-test-preload-907615
	6afbb5498fdeb       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   21 seconds ago      Running             kube-apiserver            1                   561abcfe5a6bc       kube-apiserver-test-preload-907615
	5d3fa7416ce56       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   21 seconds ago      Running             kube-controller-manager   1                   5a98f393c6bff       kube-controller-manager-test-preload-907615
	
	
	==> coredns [831e2eaa05fb7244eac0df9b4e13ccf72c341136daab5fd9bfff8c525e07072a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45935 - 53650 "HINFO IN 6373335514800264831.6992430116323848013. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024666327s
	
	
	==> describe nodes <==
	Name:               test-preload-907615
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-907615
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=test-preload-907615
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T14_40_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 14:40:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-907615
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 14:41:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 14:41:45 +0000   Mon, 06 Oct 2025 14:40:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 14:41:45 +0000   Mon, 06 Oct 2025 14:40:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 14:41:45 +0000   Mon, 06 Oct 2025 14:40:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 14:41:45 +0000   Mon, 06 Oct 2025 14:41:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    test-preload-907615
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 9671fd2ee0eb461e9c737e9f0c8d7ea6
	  System UUID:                9671fd2e-e0eb-461e-9c73-7e9f0c8d7ea6
	  Boot ID:                    25439327-b446-43c4-ac83-bc4dcb204f93
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-qk97j                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     75s
	  kube-system                 etcd-test-preload-907615                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         81s
	  kube-system                 kube-apiserver-test-preload-907615             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-test-preload-907615    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-pvdrb                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-test-preload-907615             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 74s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Normal   NodeHasSufficientMemory  80s                kubelet          Node test-preload-907615 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    80s                kubelet          Node test-preload-907615 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s                kubelet          Node test-preload-907615 status is now: NodeHasSufficientPID
	  Normal   Starting                 80s                kubelet          Starting kubelet.
	  Normal   NodeReady                79s                kubelet          Node test-preload-907615 status is now: NodeReady
	  Normal   RegisteredNode           76s                node-controller  Node test-preload-907615 event: Registered Node test-preload-907615 in Controller
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-907615 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-907615 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-907615 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 19s                kubelet          Node test-preload-907615 has been rebooted, boot id: 25439327-b446-43c4-ac83-bc4dcb204f93
	  Normal   RegisteredNode           16s                node-controller  Node test-preload-907615 event: Registered Node test-preload-907615 in Controller
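	
	The Ready/MemoryPressure/DiskPressure/PIDPressure rows in the Conditions table above can also be read programmatically; a small hedged client-go sketch (node name from this report, kubeconfig path an assumption):
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "test-preload-907615", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			// Same rows kubectl describe prints above.
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}
	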
	
	
	==> dmesg <==
	[Oct 6 14:41] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001690] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007127] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.987159] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085150] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.112459] kauditd_printk_skb: 94 callbacks suppressed
	[  +4.487400] kauditd_printk_skb: 185 callbacks suppressed
	[  +0.000065] kauditd_printk_skb: 143 callbacks suppressed
	[  +6.536243] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [e4fd7f418819c6035b5da587c5a68b15a8c25bbbfc81ace95f68fb4becbc15c1] <==
	{"level":"info","ts":"2025-10-06T14:41:33.380021Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-06T14:41:33.380312Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"65e271b8f7cb8d0f","initial-advertise-peer-urls":["https://192.168.39.101:2380"],"listen-peer-urls":["https://192.168.39.101:2380"],"advertise-client-urls":["https://192.168.39.101:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.101:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-06T14:41:33.380362Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-06T14:41:33.372192Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-06T14:41:33.390602Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-06T14:41:33.390692Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-06T14:41:33.381158Z","caller":"etcdserver/server.go:757","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"65e271b8f7cb8d0f","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-10-06T14:41:33.381192Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.101:2380"}
	{"level":"info","ts":"2025-10-06T14:41:33.390857Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.101:2380"}
	{"level":"info","ts":"2025-10-06T14:41:33.409061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-06T14:41:33.409154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-06T14:41:33.409181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f received MsgPreVoteResp from 65e271b8f7cb8d0f at term 2"}
	{"level":"info","ts":"2025-10-06T14:41:33.409202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f became candidate at term 3"}
	{"level":"info","ts":"2025-10-06T14:41:33.409218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f received MsgVoteResp from 65e271b8f7cb8d0f at term 3"}
	{"level":"info","ts":"2025-10-06T14:41:33.409237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f became leader at term 3"}
	{"level":"info","ts":"2025-10-06T14:41:33.409254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 65e271b8f7cb8d0f elected leader 65e271b8f7cb8d0f at term 3"}
	{"level":"info","ts":"2025-10-06T14:41:33.410685Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"65e271b8f7cb8d0f","local-member-attributes":"{Name:test-preload-907615 ClientURLs:[https://192.168.39.101:2379]}","request-path":"/0/members/65e271b8f7cb8d0f/attributes","cluster-id":"24cb6133d13a326a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-06T14:41:33.410698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-06T14:41:33.410715Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-06T14:41:33.421612Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-06T14:41:33.421668Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-06T14:41:33.421771Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-06T14:41:33.422250Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-06T14:41:33.422425Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.101:2379"}
	{"level":"info","ts":"2025-10-06T14:41:33.423896Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:41:54 up 0 min,  0 users,  load average: 1.37, 0.40, 0.14
	Linux test-preload-907615 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [6afbb5498fdebabd8e1c03421ca06151a34bdd36de55afe9e630e3cb0e0239c4] <==
	I1006 14:41:35.331010       1 aggregator.go:171] initial CRD sync complete...
	I1006 14:41:35.331040       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 14:41:35.331047       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 14:41:35.331053       1 cache.go:39] Caches are synced for autoregister controller
	I1006 14:41:35.331292       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 14:41:35.338028       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1006 14:41:35.338047       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1006 14:41:35.338135       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1006 14:41:35.338409       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 14:41:35.338498       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1006 14:41:35.338034       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1006 14:41:35.349794       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1006 14:41:35.375987       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1006 14:41:35.403511       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1006 14:41:35.403675       1 policy_source.go:240] refreshing policies
	I1006 14:41:35.487717       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 14:41:35.793563       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1006 14:41:36.251406       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 14:41:37.019729       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1006 14:41:37.083013       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1006 14:41:37.115981       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 14:41:37.125039       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 14:41:38.600234       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1006 14:41:38.891807       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 14:41:38.942418       1 controller.go:615] quota admission added evaluator for: endpoints
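The apiserver restart is healthy: caches sync, quota admission evaluators re-register as objects are recreated, and the single error about removing old endpoints from the kubernetes Service is the usual benign message when no stale endpoint list survives a restart. A quick sanity check against this cluster (a sketch; the context name comes from this test profile) is the verbose readiness endpoint, which lists every registered check:

	kubectl --context test-preload-907615 get --raw='/readyz?verbose'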
	
	
	==> kube-controller-manager [5d3fa7416ce56fae288562b4370457b326f61e9e64018df2294c0800f70c419b] <==
	I1006 14:41:38.549062       1 shared_informer.go:320] Caches are synced for resource quota
	I1006 14:41:38.550899       1 shared_informer.go:320] Caches are synced for namespace
	I1006 14:41:38.557400       1 shared_informer.go:320] Caches are synced for garbage collector
	I1006 14:41:38.557421       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 14:41:38.557427       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 14:41:38.561789       1 shared_informer.go:320] Caches are synced for endpoint
	I1006 14:41:38.566927       1 shared_informer.go:320] Caches are synced for garbage collector
	I1006 14:41:38.567959       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1006 14:41:38.573003       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-907615"
	I1006 14:41:38.573925       1 shared_informer.go:320] Caches are synced for daemon sets
	I1006 14:41:38.575808       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1006 14:41:38.576644       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1006 14:41:38.580363       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1006 14:41:38.582254       1 shared_informer.go:320] Caches are synced for service account
	I1006 14:41:38.585668       1 shared_informer.go:320] Caches are synced for deployment
	I1006 14:41:38.612030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="120.057796ms"
	I1006 14:41:38.613421       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="121.099µs"
	I1006 14:41:38.764915       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-907615"
	I1006 14:41:43.945574       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="92.805µs"
	I1006 14:41:44.008489       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="28.701157ms"
	I1006 14:41:44.008637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="55.893µs"
	I1006 14:41:45.605701       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-907615"
	I1006 14:41:45.621248       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-907615"
	I1006 14:41:48.783754       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-907615"
	I1006 14:41:48.783756       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [360a8da685241ad0b4d1fc0fc0dd07bb599a9c7b46933bd43e827e35a6169769] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1006 14:41:36.476089       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1006 14:41:36.496875       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.101"]
	E1006 14:41:36.496994       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 14:41:36.555789       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1006 14:41:36.555909       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1006 14:41:36.556019       1 server_linux.go:170] "Using iptables Proxier"
	I1006 14:41:36.561509       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 14:41:36.562508       1 server.go:497] "Version info" version="v1.32.0"
	I1006 14:41:36.562698       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:41:36.570850       1 config.go:199] "Starting service config controller"
	I1006 14:41:36.571713       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1006 14:41:36.571188       1 config.go:105] "Starting endpoint slice config controller"
	I1006 14:41:36.571797       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1006 14:41:36.572023       1 config.go:329] "Starting node config controller"
	I1006 14:41:36.572051       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1006 14:41:36.672390       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1006 14:41:36.672432       1 shared_informer.go:320] Caches are synced for node config
	I1006 14:41:36.672452       1 shared_informer.go:320] Caches are synced for service config
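The "Operation not supported" cleanup errors only mean the guest kernel rejects the nftables commands kube-proxy issues while tidying up at startup; the lines that follow show it falling back to the iptables proxier and syncing all three config caches. To confirm the iptables path actually programmed service rules, one could dump the nat table inside the guest (a sketch; KUBE-* are the standard chain names kube-proxy creates):

	minikube ssh -p test-preload-907615 -- sudo iptables -t nat -S | grep KUBE-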
	
	
	==> kube-scheduler [f9ff36e8cc82ab77145491dc552cd7b3442715158081fe12c2e52b3392623857] <==
	I1006 14:41:34.762020       1 serving.go:386] Generated self-signed cert in-memory
	W1006 14:41:35.290381       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1006 14:41:35.290474       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1006 14:41:35.290498       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1006 14:41:35.290578       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1006 14:41:35.320981       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1006 14:41:35.321021       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:41:35.324811       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:41:35.324907       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1006 14:41:35.324919       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1006 14:41:35.325057       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 14:41:35.425163       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
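The three authentication warnings are transient: the scheduler could not yet read the extension-apiserver-authentication configmap, continued treating requests as anonymous, and the final line shows the client-ca informer syncing moments later. If the warning persisted, the configmap itself would be the first thing to inspect (a sketch):

	kubectl --context test-preload-907615 -n kube-system get configmap extension-apiserver-authentication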
	
	
	==> kubelet <==
	Oct 06 14:41:35 test-preload-907615 kubelet[1154]: I1006 14:41:35.784057    1154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aee33994-c241-4c30-b74c-fef0d4607229-xtables-lock\") pod \"kube-proxy-pvdrb\" (UID: \"aee33994-c241-4c30-b74c-fef0d4607229\") " pod="kube-system/kube-proxy-pvdrb"
	Oct 06 14:41:35 test-preload-907615 kubelet[1154]: I1006 14:41:35.784103    1154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/132c360c-c1cb-4550-963a-4047e964343e-tmp\") pod \"storage-provisioner\" (UID: \"132c360c-c1cb-4550-963a-4047e964343e\") " pod="kube-system/storage-provisioner"
	Oct 06 14:41:35 test-preload-907615 kubelet[1154]: E1006 14:41:35.784209    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 06 14:41:35 test-preload-907615 kubelet[1154]: E1006 14:41:35.784269    1154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3caad5a-3054-4d06-a1dc-3cd0337df5dd-config-volume podName:a3caad5a-3054-4d06-a1dc-3cd0337df5dd nodeName:}" failed. No retries permitted until 2025-10-06 14:41:36.284250399 +0000 UTC m=+4.709848382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3caad5a-3054-4d06-a1dc-3cd0337df5dd-config-volume") pod "coredns-668d6bf9bc-qk97j" (UID: "a3caad5a-3054-4d06-a1dc-3cd0337df5dd") : object "kube-system"/"coredns" not registered
	Oct 06 14:41:35 test-preload-907615 kubelet[1154]: I1006 14:41:35.863243    1154 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-907615"
	Oct 06 14:41:35 test-preload-907615 kubelet[1154]: I1006 14:41:35.863376    1154 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-907615"
	Oct 06 14:41:35 test-preload-907615 kubelet[1154]: I1006 14:41:35.864475    1154 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-907615"
	Oct 06 14:41:35 test-preload-907615 kubelet[1154]: E1006 14:41:35.887345    1154 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-907615\" already exists" pod="kube-system/etcd-test-preload-907615"
	Oct 06 14:41:35 test-preload-907615 kubelet[1154]: E1006 14:41:35.887667    1154 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-907615\" already exists" pod="kube-system/kube-apiserver-test-preload-907615"
	Oct 06 14:41:35 test-preload-907615 kubelet[1154]: E1006 14:41:35.889377    1154 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-907615\" already exists" pod="kube-system/kube-controller-manager-test-preload-907615"
	Oct 06 14:41:36 test-preload-907615 kubelet[1154]: E1006 14:41:36.287219    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 06 14:41:36 test-preload-907615 kubelet[1154]: E1006 14:41:36.287295    1154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3caad5a-3054-4d06-a1dc-3cd0337df5dd-config-volume podName:a3caad5a-3054-4d06-a1dc-3cd0337df5dd nodeName:}" failed. No retries permitted until 2025-10-06 14:41:37.287281601 +0000 UTC m=+5.712879584 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3caad5a-3054-4d06-a1dc-3cd0337df5dd-config-volume") pod "coredns-668d6bf9bc-qk97j" (UID: "a3caad5a-3054-4d06-a1dc-3cd0337df5dd") : object "kube-system"/"coredns" not registered
	Oct 06 14:41:36 test-preload-907615 kubelet[1154]: E1006 14:41:36.752647    1154 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-qk97j" podUID="a3caad5a-3054-4d06-a1dc-3cd0337df5dd"
	Oct 06 14:41:36 test-preload-907615 kubelet[1154]: E1006 14:41:36.767897    1154 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Oct 06 14:41:36 test-preload-907615 kubelet[1154]: I1006 14:41:36.889491    1154 scope.go:117] "RemoveContainer" containerID="ea724e86d84ded7e67c86c0a052b7aec4b4e6dcb67189a9d55e981e1cfbdb6c0"
	Oct 06 14:41:37 test-preload-907615 kubelet[1154]: E1006 14:41:37.296100    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 06 14:41:37 test-preload-907615 kubelet[1154]: E1006 14:41:37.296185    1154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3caad5a-3054-4d06-a1dc-3cd0337df5dd-config-volume podName:a3caad5a-3054-4d06-a1dc-3cd0337df5dd nodeName:}" failed. No retries permitted until 2025-10-06 14:41:39.296170609 +0000 UTC m=+7.721768604 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3caad5a-3054-4d06-a1dc-3cd0337df5dd-config-volume") pod "coredns-668d6bf9bc-qk97j" (UID: "a3caad5a-3054-4d06-a1dc-3cd0337df5dd") : object "kube-system"/"coredns" not registered
	Oct 06 14:41:38 test-preload-907615 kubelet[1154]: E1006 14:41:38.752286    1154 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-qk97j" podUID="a3caad5a-3054-4d06-a1dc-3cd0337df5dd"
	Oct 06 14:41:39 test-preload-907615 kubelet[1154]: E1006 14:41:39.313966    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 06 14:41:39 test-preload-907615 kubelet[1154]: E1006 14:41:39.314083    1154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3caad5a-3054-4d06-a1dc-3cd0337df5dd-config-volume podName:a3caad5a-3054-4d06-a1dc-3cd0337df5dd nodeName:}" failed. No retries permitted until 2025-10-06 14:41:43.314058343 +0000 UTC m=+11.739656345 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3caad5a-3054-4d06-a1dc-3cd0337df5dd-config-volume") pod "coredns-668d6bf9bc-qk97j" (UID: "a3caad5a-3054-4d06-a1dc-3cd0337df5dd") : object "kube-system"/"coredns" not registered
	Oct 06 14:41:40 test-preload-907615 kubelet[1154]: E1006 14:41:40.752348    1154 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-qk97j" podUID="a3caad5a-3054-4d06-a1dc-3cd0337df5dd"
	Oct 06 14:41:41 test-preload-907615 kubelet[1154]: E1006 14:41:41.765133    1154 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759761701763937399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 06 14:41:41 test-preload-907615 kubelet[1154]: E1006 14:41:41.765176    1154 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759761701763937399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 06 14:41:51 test-preload-907615 kubelet[1154]: E1006 14:41:51.766513    1154 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759761711766084215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 06 14:41:51 test-preload-907615 kubelet[1154]: E1006 14:41:51.767021    1154 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759761711766084215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
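Two failure loops are visible here, and both self-heal. The coredns config-volume mount retries with doubling backoff (500ms, 1s, 2s, 4s) until the kubelet's informer registers the kube-system/coredns configmap, and the "network not ready" errors persist only until a CNI plugin writes its config into /etc/cni/net.d/. Hedged checks for both, using this test's profile name:

	kubectl --context test-preload-907615 -n kube-system get configmap coredns
	minikube ssh -p test-preload-907615 -- ls /etc/cni/net.d/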
	
	
	==> storage-provisioner [5bc914a4cdbdbd66a0fd410677e47cfec9861a542e170e020cb580fb61bdfc7d] <==
	I1006 14:41:37.045663       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 14:41:37.066218       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 14:41:37.066308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1006 14:41:54.485047       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 14:41:54.485613       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-907615_fa8488e6-6498-4753-9cfb-a8892ddceb48!
	I1006 14:41:54.485804       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a522b09b-7d2a-4097-bc28-785935b57cbc", APIVersion:"v1", ResourceVersion:"522", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-907615_fa8488e6-6498-4753-9cfb-a8892ddceb48 became leader
	I1006 14:41:54.586257       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-907615_fa8488e6-6498-4753-9cfb-a8892ddceb48!
	
	
	==> storage-provisioner [ea724e86d84ded7e67c86c0a052b7aec4b4e6dcb67189a9d55e981e1cfbdb6c0] <==
	I1006 14:41:36.283010       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 14:41:36.298575       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
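The two storage-provisioner excerpts are the same pod across two containers: the first (ea724e86...) started before the kubernetes Service VIP at 10.96.0.1:443 was reachable and exited fatally, the kubelet removed it (the RemoveContainer line in the kubelet log above), and the replacement (5bc914a4...) acquired the leader lease at 14:41:54. A sketch to confirm that sequence from the container runtime's side:

	minikube ssh -p test-preload-907615 -- sudo crictl ps -a --name storage-provisioner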
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-907615 -n test-preload-907615
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-907615 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-907615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-907615
--- FAIL: TestPreload (137.13s)
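To reproduce this outside CI, the integration suite runs with plain go test from a minikube checkout after building out/minikube-linux-amd64 with make; the exact start flags vary per job, so treat this as a sketch:

	go test ./test/integration -run TestPreload -timeout 40m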

                                                
                                    
TestKubernetesUpgrade (931.33s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1006 14:47:03.587038  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:47:32.010289  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.978637718s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-317912
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-317912: (1.795263217s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-317912 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-317912 status --format={{.Host}}: exit status 7 (80.854911ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
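Per minikube's own status help text, the exit code is a bitmask (1 = minikube NOK, 2 = cluster NOK, 4 = Kubernetes NOK), so exit 7 is exactly what a fully stopped profile should report, which is why the harness marks it "may be ok". A sketch:

	out/minikube-linux-amd64 status -p kubernetes-upgrade-317912; echo "exit=$?"   # expect 7 while stopped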
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1006 14:47:48.923230  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (33.37804439s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-317912 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (95.47054ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-317912] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-317912
	    minikube start -p kubernetes-upgrade-317912 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3179122 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-317912 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 80 (13m54.809987454s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-317912] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-317912" primary control-plane node in "kubernetes-upgrade-317912" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 14:48:20.866921  781281 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:48:20.867064  781281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:48:20.867074  781281 out.go:374] Setting ErrFile to fd 2...
	I1006 14:48:20.867080  781281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:48:20.867311  781281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:48:20.867776  781281 out.go:368] Setting JSON to false
	I1006 14:48:20.868763  781281 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16252,"bootTime":1759745849,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:48:20.868857  781281 start.go:140] virtualization: kvm guest
	I1006 14:48:20.872739  781281 out.go:179] * [kubernetes-upgrade-317912] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:48:20.874688  781281 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:48:20.874696  781281 notify.go:220] Checking for updates...
	I1006 14:48:20.877544  781281 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:48:20.879062  781281 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 14:48:20.880416  781281 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 14:48:20.882688  781281 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:48:20.884104  781281 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:48:20.886141  781281 config.go:182] Loaded profile config "kubernetes-upgrade-317912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:48:20.886848  781281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:48:20.886939  781281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:48:20.907854  781281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I1006 14:48:20.908420  781281 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:48:20.909080  781281 main.go:141] libmachine: Using API Version  1
	I1006 14:48:20.909096  781281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:48:20.909573  781281 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:48:20.909847  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:20.910223  781281 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:48:20.910573  781281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:48:20.910662  781281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:48:20.927253  781281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I1006 14:48:20.927855  781281 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:48:20.928448  781281 main.go:141] libmachine: Using API Version  1
	I1006 14:48:20.928513  781281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:48:20.928976  781281 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:48:20.929185  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:20.970111  781281 out.go:179] * Using the kvm2 driver based on existing profile
	I1006 14:48:20.971316  781281 start.go:304] selected driver: kvm2
	I1006 14:48:20.971335  781281 start.go:924] validating driver "kvm2" against &{Name:kubernetes-upgrade-317912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-317912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:48:20.971458  781281 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:48:20.972470  781281 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:48:20.972564  781281 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 14:48:20.990143  781281 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 14:48:20.990190  781281 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 14:48:21.005348  781281 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 14:48:21.005831  781281 cni.go:84] Creating CNI manager for ""
	I1006 14:48:21.005898  781281 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:48:21.005945  781281 start.go:348] cluster config:
	{Name:kubernetes-upgrade-317912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-317912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:48:21.006097  781281 iso.go:125] acquiring lock: {Name:mk8de6812bb58933af0bc6eb1d955bf118a3bcec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:48:21.007933  781281 out.go:179] * Starting "kubernetes-upgrade-317912" primary control-plane node in "kubernetes-upgrade-317912" cluster
	I1006 14:48:21.009215  781281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:48:21.009261  781281 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:48:21.009275  781281 cache.go:58] Caching tarball of preloaded images
	I1006 14:48:21.009366  781281 preload.go:233] Found /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:48:21.009376  781281 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:48:21.009470  781281 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kubernetes-upgrade-317912/config.json ...
	I1006 14:48:21.009693  781281 start.go:360] acquireMachinesLock for kubernetes-upgrade-317912: {Name:mkc5be1cfc8fcefa1839aef4c67a376cc5095e30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1006 14:48:21.009749  781281 start.go:364] duration metric: took 34.159µs to acquireMachinesLock for "kubernetes-upgrade-317912"
	I1006 14:48:21.009771  781281 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:48:21.009781  781281 fix.go:54] fixHost starting: 
	I1006 14:48:21.010066  781281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:48:21.010113  781281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:48:21.024843  781281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I1006 14:48:21.025497  781281 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:48:21.026123  781281 main.go:141] libmachine: Using API Version  1
	I1006 14:48:21.026153  781281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:48:21.026518  781281 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:48:21.026738  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:21.026895  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetState
	I1006 14:48:21.028987  781281 fix.go:112] recreateIfNeeded on kubernetes-upgrade-317912: state=Running err=<nil>
	W1006 14:48:21.029019  781281 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:48:21.030857  781281 out.go:252] * Updating the running kvm2 "kubernetes-upgrade-317912" VM ...
	I1006 14:48:21.030892  781281 machine.go:93] provisionDockerMachine start ...
	I1006 14:48:21.030911  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:21.031144  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:21.034195  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.034829  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:21.034858  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.035116  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:21.035295  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.035453  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.035650  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:21.035857  781281 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:21.036194  781281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1006 14:48:21.036210  781281 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:48:21.171949  781281 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-317912
	
	I1006 14:48:21.171977  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetMachineName
	I1006 14:48:21.172256  781281 buildroot.go:166] provisioning hostname "kubernetes-upgrade-317912"
	I1006 14:48:21.172296  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetMachineName
	I1006 14:48:21.172472  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:21.176408  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.176940  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:21.176982  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.177412  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:21.177684  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.177896  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.178076  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:21.178257  781281 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:21.178545  781281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1006 14:48:21.178564  781281 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-317912 && echo "kubernetes-upgrade-317912" | sudo tee /etc/hostname
	I1006 14:48:21.369751  781281 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-317912
	
	I1006 14:48:21.369795  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:21.373455  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.374139  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:21.374181  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.374689  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:21.375002  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.375223  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.375407  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:21.375625  781281 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:21.375896  781281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1006 14:48:21.375914  781281 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-317912' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-317912/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-317912' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:48:21.504875  781281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:48:21.504921  781281 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21701-739942/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-739942/.minikube}
	I1006 14:48:21.505001  781281 buildroot.go:174] setting up certificates
	I1006 14:48:21.505018  781281 provision.go:84] configureAuth start
	I1006 14:48:21.505037  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetMachineName
	I1006 14:48:21.505368  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetIP
	I1006 14:48:21.509414  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.509947  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:21.510019  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.510258  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:21.513658  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.514137  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:21.514184  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.514298  781281 provision.go:143] copyHostCerts
	I1006 14:48:21.514363  781281 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem, removing ...
	I1006 14:48:21.514392  781281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem
	I1006 14:48:21.514485  781281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem (1078 bytes)
	I1006 14:48:21.514701  781281 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem, removing ...
	I1006 14:48:21.514715  781281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem
	I1006 14:48:21.514750  781281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem (1123 bytes)
	I1006 14:48:21.514821  781281 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem, removing ...
	I1006 14:48:21.514829  781281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem
	I1006 14:48:21.514853  781281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem (1679 bytes)
	I1006 14:48:21.514913  781281 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-317912 san=[127.0.0.1 192.168.39.45 kubernetes-upgrade-317912 localhost minikube]
	I1006 14:48:22.246649  781281 provision.go:177] copyRemoteCerts
	I1006 14:48:22.246719  781281 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:48:22.246753  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:22.250988  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:22.471758  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:22.471794  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:22.472272  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:22.472621  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:22.472857  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:22.473093  781281 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa Username:docker}
	I1006 14:48:22.599734  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1006 14:48:22.672701  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1006 14:48:22.756733  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:48:22.831811  781281 provision.go:87] duration metric: took 1.326772559s to configureAuth
	I1006 14:48:22.831859  781281 buildroot.go:189] setting minikube options for container-runtime
	I1006 14:48:22.832114  781281 config.go:182] Loaded profile config "kubernetes-upgrade-317912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:48:22.832256  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:22.836152  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:22.836701  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:22.836750  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:22.837055  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:22.837297  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:22.837492  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:22.837665  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:22.837902  781281 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:22.838120  781281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1006 14:48:22.838136  781281 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:48:23.988855  781281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:48:23.988892  781281 machine.go:96] duration metric: took 2.957989013s to provisionDockerMachine
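Side note on the provisioning step just above: rather than editing crio.conf directly, minikube drops a one-line environment file and bounces the service. A minimal by-hand equivalent, assuming (not shown in this log) that the guest's crio.service sources /etc/sysconfig/crio.minikube via an EnvironmentFile= directive and expands $CRIO_MINIKUBE_OPTIONS in its ExecStart:

	sudo mkdir -p /etc/sysconfig
	printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio   # picks up the new registry flag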
	I1006 14:48:23.988910  781281 start.go:293] postStartSetup for "kubernetes-upgrade-317912" (driver="kvm2")
	I1006 14:48:23.988954  781281 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:48:23.989004  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:23.989410  781281 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:48:23.989467  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:23.993148  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:23.993758  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:23.993793  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:23.993988  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:23.994216  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:23.994393  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:23.994649  781281 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa Username:docker}
	I1006 14:48:24.129904  781281 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:48:24.139427  781281 info.go:137] Remote host: Buildroot 2025.02
	I1006 14:48:24.139539  781281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/addons for local assets ...
	I1006 14:48:24.139639  781281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/files for local assets ...
	I1006 14:48:24.139780  781281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem -> 7438512.pem in /etc/ssl/certs
	I1006 14:48:24.139924  781281 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:48:24.177944  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:48:24.249784  781281 start.go:296] duration metric: took 260.85658ms for postStartSetup
	I1006 14:48:24.249867  781281 fix.go:56] duration metric: took 3.240057002s for fixHost
	I1006 14:48:24.249896  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:24.253541  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.254015  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:24.254059  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.254287  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:24.254562  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:24.254792  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:24.254972  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:24.255240  781281 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:24.255541  781281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1006 14:48:24.255562  781281 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1006 14:48:24.443570  781281 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759762104.437038277
	
	I1006 14:48:24.443626  781281 fix.go:216] guest clock: 1759762104.437038277
	I1006 14:48:24.443637  781281 fix.go:229] Guest: 2025-10-06 14:48:24.437038277 +0000 UTC Remote: 2025-10-06 14:48:24.249873501 +0000 UTC m=+3.426269664 (delta=187.164776ms)
	I1006 14:48:24.443666  781281 fix.go:200] guest clock delta is within tolerance: 187.164776ms
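The tolerance check above is plain subtraction of the two timestamps already printed; reproduced by hand:

	# guest clock (date +%s.%N on the VM) minus the host-side reference time
	echo '1759762104.437038277 - 1759762104.249873501' | bc
	# .187164776  -> 187.164776 ms, matching the delta minikube reports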
	I1006 14:48:24.443672  781281 start.go:83] releasing machines lock for "kubernetes-upgrade-317912", held for 3.433910698s
	I1006 14:48:24.443703  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:24.444039  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetIP
	I1006 14:48:24.447844  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.448323  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:24.448376  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.448681  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:24.449472  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:24.449729  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:24.449855  781281 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:48:24.449901  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:24.450020  781281 ssh_runner.go:195] Run: cat /version.json
	I1006 14:48:24.450052  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:24.453501  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.453598  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.454061  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:24.454098  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:24.454138  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.454155  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.454451  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:24.454664  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:24.454673  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:24.454933  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:24.454952  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:24.455163  781281 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa Username:docker}
	I1006 14:48:24.455223  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:24.455433  781281 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa Username:docker}
	I1006 14:48:24.644428  781281 ssh_runner.go:195] Run: systemctl --version
	I1006 14:48:24.661947  781281 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:48:24.911132  781281 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:48:24.935666  781281 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:48:24.935746  781281 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:48:24.965417  781281 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:48:24.965448  781281 start.go:495] detecting cgroup driver to use...
	I1006 14:48:24.965538  781281 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:48:25.009626  781281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:48:25.047603  781281 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:48:25.047785  781281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:48:25.142428  781281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:48:25.187799  781281 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:48:25.611787  781281 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:48:25.895358  781281 docker.go:234] disabling docker service ...
	I1006 14:48:25.895452  781281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:48:25.929546  781281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:48:25.953038  781281 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:48:26.184851  781281 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:48:26.388894  781281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:48:26.428225  781281 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:48:26.455713  781281 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:48:26.455780  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.471571  781281 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 14:48:26.471662  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.487740  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.504412  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.520694  781281 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:48:26.538847  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.556429  781281 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.577085  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
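The sed/grep pipeline above edits a single drop-in file; the keys it leaves behind in /etc/crio/crio.conf.d/02-crio.conf should look like the sketch below (reconstructed from the substitutions themselves, not captured from the guest):

	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]
	grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf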
	I1006 14:48:26.592493  781281 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:48:26.607223  781281 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:48:26.621468  781281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:26.826468  781281 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:49:57.313930  781281 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.487411994s)
	I1006 14:49:57.313976  781281 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:49:57.314063  781281 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:49:57.321769  781281 start.go:563] Will wait 60s for crictl version
	I1006 14:49:57.321863  781281 ssh_runner.go:195] Run: which crictl
	I1006 14:49:57.327343  781281 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 14:49:57.387180  781281 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1006 14:49:57.387292  781281 ssh_runner.go:195] Run: crio --version
	I1006 14:49:57.427610  781281 ssh_runner.go:195] Run: crio --version
	I1006 14:49:57.476402  781281 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1006 14:49:57.477854  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetIP
	I1006 14:49:57.482085  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:49:57.482699  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:49:57.482738  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:49:57.483123  781281 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1006 14:49:57.488966  781281 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-317912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-317912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:49:57.489288  781281 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:49:57.489383  781281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:49:57.562054  781281 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:49:57.562086  781281 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:49:57.562154  781281 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:49:57.616510  781281 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:49:57.616540  781281 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:49:57.616548  781281 kubeadm.go:934] updating node { 192.168.39.45 8443 v1.34.1 crio true true} ...
	I1006 14:49:57.616704  781281 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-317912 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-317912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
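The empty `ExecStart=` line in the rendered unit above is deliberate systemd idiom: a non-oneshot service may define only one ExecStart, so a blank assignment first clears any inherited value before the minikube-specific command line is set. The same pattern by hand, with an illustrative subset of the flags (minikube writes the full version to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, as logged below):

	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet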
	I1006 14:49:57.616796  781281 ssh_runner.go:195] Run: crio config
	I1006 14:49:57.698376  781281 cni.go:84] Creating CNI manager for ""
	I1006 14:49:57.698410  781281 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:49:57.698436  781281 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:49:57.698463  781281 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.45 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-317912 NodeName:kubernetes-upgrade-317912 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:49:57.698716  781281 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-317912"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.45"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.45"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:49:57.698799  781281 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:49:57.718843  781281 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:49:57.718992  781281 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:49:57.737078  781281 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1006 14:49:57.764022  781281 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:49:57.791772  781281 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
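With the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked on its own; a sketch, assuming the `config validate` subcommand shipped with the v1.34.1 kubeadm binary already present under /var/lib/minikube/binaries:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new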
	I1006 14:49:57.820417  781281 ssh_runner.go:195] Run: grep 192.168.39.45	control-plane.minikube.internal$ /etc/hosts
	I1006 14:49:57.826857  781281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:49:58.121767  781281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:49:58.148054  781281 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kubernetes-upgrade-317912 for IP: 192.168.39.45
	I1006 14:49:58.148087  781281 certs.go:195] generating shared ca certs ...
	I1006 14:49:58.148145  781281 certs.go:227] acquiring lock for ca certs: {Name:mkac26b60e1fd10143a5d4dc5ca0de64e9dd4f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:49:58.148391  781281 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key
	I1006 14:49:58.148457  781281 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key
	I1006 14:49:58.148471  781281 certs.go:257] generating profile certs ...
	I1006 14:49:58.148643  781281 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kubernetes-upgrade-317912/client.key
	I1006 14:49:58.148715  781281 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kubernetes-upgrade-317912/apiserver.key.0c02819d
	I1006 14:49:58.148765  781281 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kubernetes-upgrade-317912/proxy-client.key
	I1006 14:49:58.148921  781281 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem (1338 bytes)
	W1006 14:49:58.149007  781281 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851_empty.pem, impossibly tiny 0 bytes
	I1006 14:49:58.149021  781281 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem (1679 bytes)
	I1006 14:49:58.149054  781281 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem (1078 bytes)
	I1006 14:49:58.149114  781281 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:49:58.149147  781281 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem (1679 bytes)
	I1006 14:49:58.149207  781281 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:49:58.150090  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:49:58.196059  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 14:49:58.244625  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:49:58.287156  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:49:58.328309  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kubernetes-upgrade-317912/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1006 14:49:58.370431  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kubernetes-upgrade-317912/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:49:58.414806  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kubernetes-upgrade-317912/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:49:58.453411  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kubernetes-upgrade-317912/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:49:58.508406  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /usr/share/ca-certificates/7438512.pem (1708 bytes)
	I1006 14:49:58.555322  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:49:58.604060  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem --> /usr/share/ca-certificates/743851.pem (1338 bytes)
	I1006 14:49:58.649771  781281 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:49:58.679395  781281 ssh_runner.go:195] Run: openssl version
	I1006 14:49:58.690104  781281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7438512.pem && ln -fs /usr/share/ca-certificates/7438512.pem /etc/ssl/certs/7438512.pem"
	I1006 14:49:58.711015  781281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7438512.pem
	I1006 14:49:58.720621  781281 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 13:59 /usr/share/ca-certificates/7438512.pem
	I1006 14:49:58.720732  781281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7438512.pem
	I1006 14:49:58.734176  781281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7438512.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:49:58.753691  781281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:49:58.777090  781281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:49:58.786152  781281 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:50 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:49:58.786266  781281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:49:58.798435  781281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:49:58.818381  781281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/743851.pem && ln -fs /usr/share/ca-certificates/743851.pem /etc/ssl/certs/743851.pem"
	I1006 14:49:58.839770  781281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/743851.pem
	I1006 14:49:58.848827  781281 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 13:59 /usr/share/ca-certificates/743851.pem
	I1006 14:49:58.848911  781281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/743851.pem
	I1006 14:49:58.860378  781281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/743851.pem /etc/ssl/certs/51391683.0"
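The ls/openssl/ln triplets above implement OpenSSL's hashed-directory convention: clients locate a CA under /etc/ssl/certs by a symlink named <subject-hash>.0 rather than by file name. By hand, for the minikubeCA case logged above:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the b5213941.0 link created above
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0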
	I1006 14:49:58.875045  781281 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:49:58.882027  781281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:49:58.894399  781281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:49:58.903997  781281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:49:58.912701  781281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:49:58.921434  781281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:49:58.929743  781281 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
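Each `-checkend 86400` probe above asks whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it remains valid, non-zero is what would force minikube to regenerate it. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo 'valid for at least 24h' || echo 'expires within 24h'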
	I1006 14:49:58.938735  781281 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-317912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-317912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:49:58.938827  781281 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:49:58.938935  781281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:49:58.990422  781281 cri.go:89] found id: "4a390ad4199c06a5adb2dea2ab0929a79014ff7070629392e84f3a7c47815730"
	I1006 14:49:58.990455  781281 cri.go:89] found id: "204e325c74d0e06f3e26842e7b25b5930ced734863c1dcb470f51cd8bc64ad1e"
	I1006 14:49:58.990462  781281 cri.go:89] found id: "0af1b838622ee1a5eda96907e5e517d5e52491e4083264ed946ed1283c58dfe7"
	I1006 14:49:58.990468  781281 cri.go:89] found id: "d164899b0f7aa467689d749846c99cf9d0bb84fe4362146e3847987a62a6bf1d"
	I1006 14:49:58.990473  781281 cri.go:89] found id: "65d6e616a562e8e4ba329cc3db1c10037a8e7641ecab8e80e2924ddb944bb650"
	I1006 14:49:58.990478  781281 cri.go:89] found id: "d33640ebc9bb7a04a17eafaeb6b34d9d23ca5752aaff3a85bb0e934c48fa5fef"
	I1006 14:49:58.990484  781281 cri.go:89] found id: "4d6a8e4f900990969f60cda86ac7d6efdba66933493eab878c61b19b95b775b5"
	I1006 14:49:58.990488  781281 cri.go:89] found id: "7f65ba47bf4ea068d1fa9a636d3117b0e6edf44aad7029c936aaee97b521ac0d"
	I1006 14:49:58.990492  781281 cri.go:89] found id: ""
	I1006 14:49:58.990556  781281 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 80
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-10-06 15:02:15.639333646 +0000 UTC m=+4322.886965418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-317912 -n kubernetes-upgrade-317912
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-317912 -n kubernetes-upgrade-317912: exit status 2 (243.77279ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-317912 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                  ARGS                                                                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-203704 image list --format=json                                                                                                                                                                                                                             │ embed-certs-203704           │ jenkins │ v1.37.0 │ 06 Oct 25 14:58 UTC │ 06 Oct 25 14:58 UTC │
	│ pause   │ -p embed-certs-203704 --alsologtostderr -v=1                                                                                                                                                                                                                            │ embed-certs-203704           │ jenkins │ v1.37.0 │ 06 Oct 25 14:58 UTC │ 06 Oct 25 14:58 UTC │
	│ unpause │ -p no-preload-764807 --alsologtostderr -v=1                                                                                                                                                                                                                             │ no-preload-764807            │ jenkins │ v1.37.0 │ 06 Oct 25 14:58 UTC │ 06 Oct 25 14:58 UTC │
	│ unpause │ -p embed-certs-203704 --alsologtostderr -v=1                                                                                                                                                                                                                            │ embed-certs-203704           │ jenkins │ v1.37.0 │ 06 Oct 25 14:58 UTC │ 06 Oct 25 14:58 UTC │
	│ delete  │ -p no-preload-764807                                                                                                                                                                                                                                                    │ no-preload-764807            │ jenkins │ v1.37.0 │ 06 Oct 25 14:58 UTC │ 06 Oct 25 14:58 UTC │
	│ delete  │ -p no-preload-764807                                                                                                                                                                                                                                                    │ no-preload-764807            │ jenkins │ v1.37.0 │ 06 Oct 25 14:58 UTC │ 06 Oct 25 14:58 UTC │
	│ start   │ -p newest-cni-320304 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-320304            │ jenkins │ v1.37.0 │ 06 Oct 25 14:58 UTC │ 06 Oct 25 14:59 UTC │
	│ delete  │ -p embed-certs-203704                                                                                                                                                                                                                                                   │ embed-certs-203704           │ jenkins │ v1.37.0 │ 06 Oct 25 14:58 UTC │ 06 Oct 25 14:58 UTC │
	│ delete  │ -p embed-certs-203704                                                                                                                                                                                                                                                   │ embed-certs-203704           │ jenkins │ v1.37.0 │ 06 Oct 25 14:58 UTC │ 06 Oct 25 14:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-320304 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                                 │ newest-cni-320304            │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 14:59 UTC │
	│ stop    │ -p newest-cni-320304 --alsologtostderr -v=3                                                                                                                                                                                                                             │ newest-cni-320304            │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 15:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-915964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                                 │ default-k8s-diff-port-915964 │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 14:59 UTC │
	│ start   │ -p default-k8s-diff-port-915964 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-915964 │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 15:00 UTC │
	│ image   │ default-k8s-diff-port-915964 image list --format=json                                                                                                                                                                                                                   │ default-k8s-diff-port-915964 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ pause   │ -p default-k8s-diff-port-915964 --alsologtostderr -v=1                                                                                                                                                                                                                  │ default-k8s-diff-port-915964 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ unpause │ -p default-k8s-diff-port-915964 --alsologtostderr -v=1                                                                                                                                                                                                                  │ default-k8s-diff-port-915964 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ delete  │ -p default-k8s-diff-port-915964                                                                                                                                                                                                                                         │ default-k8s-diff-port-915964 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ delete  │ -p default-k8s-diff-port-915964                                                                                                                                                                                                                                         │ default-k8s-diff-port-915964 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-320304 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                                            │ newest-cni-320304            │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ start   │ -p newest-cni-320304 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-320304            │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:01 UTC │
	│ image   │ newest-cni-320304 image list --format=json                                                                                                                                                                                                                              │ newest-cni-320304            │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │ 06 Oct 25 15:01 UTC │
	│ pause   │ -p newest-cni-320304 --alsologtostderr -v=1                                                                                                                                                                                                                             │ newest-cni-320304            │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │ 06 Oct 25 15:01 UTC │
	│ unpause │ -p newest-cni-320304 --alsologtostderr -v=1                                                                                                                                                                                                                             │ newest-cni-320304            │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │ 06 Oct 25 15:01 UTC │
	│ delete  │ -p newest-cni-320304                                                                                                                                                                                                                                                    │ newest-cni-320304            │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │ 06 Oct 25 15:01 UTC │
	│ delete  │ -p newest-cni-320304                                                                                                                                                                                                                                                    │ newest-cni-320304            │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │ 06 Oct 25 15:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 15:00:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 15:00:32.348644  800014 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:00:32.348909  800014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:00:32.348920  800014 out.go:374] Setting ErrFile to fd 2...
	I1006 15:00:32.348925  800014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:00:32.349138  800014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 15:00:32.349609  800014 out.go:368] Setting JSON to false
	I1006 15:00:32.350512  800014 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16983,"bootTime":1759745849,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 15:00:32.350635  800014 start.go:140] virtualization: kvm guest
	I1006 15:00:32.353558  800014 out.go:179] * [newest-cni-320304] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 15:00:32.355345  800014 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 15:00:32.355366  800014 notify.go:220] Checking for updates...
	I1006 15:00:32.358202  800014 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 15:00:32.359710  800014 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 15:00:32.361178  800014 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 15:00:32.362676  800014 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 15:00:32.364133  800014 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 15:00:32.365971  800014 config.go:182] Loaded profile config "newest-cni-320304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:00:32.366388  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:00:32.366473  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:00:32.380387  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I1006 15:00:32.380981  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:00:32.381618  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:00:32.381651  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:00:32.382089  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:00:32.382303  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:00:32.382695  800014 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 15:00:32.383155  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:00:32.383221  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:00:32.397379  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33997
	I1006 15:00:32.398025  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:00:32.398550  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:00:32.398576  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:00:32.398989  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:00:32.399222  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:00:32.434804  800014 out.go:179] * Using the kvm2 driver based on existing profile
	I1006 15:00:32.436484  800014 start.go:304] selected driver: kvm2
	I1006 15:00:32.436506  800014 start.go:924] validating driver "kvm2" against &{Name:newest-cni-320304 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-320304 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.239 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:00:32.436713  800014 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 15:00:32.437534  800014 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 15:00:32.437660  800014 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 15:00:32.452184  800014 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 15:00:32.452238  800014 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 15:00:32.466254  800014 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 15:00:32.466696  800014 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1006 15:00:32.466751  800014 cni.go:84] Creating CNI manager for ""
	I1006 15:00:32.466810  800014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 15:00:32.466847  800014 start.go:348] cluster config:
	{Name:newest-cni-320304 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-320304 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.239 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:00:32.466953  800014 iso.go:125] acquiring lock: {Name:mk8de6812bb58933af0bc6eb1d955bf118a3bcec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 15:00:32.468800  800014 out.go:179] * Starting "newest-cni-320304" primary control-plane node in "newest-cni-320304" cluster
	I1006 15:00:32.469975  800014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:00:32.470025  800014 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 15:00:32.470033  800014 cache.go:58] Caching tarball of preloaded images
	I1006 15:00:32.470114  800014 preload.go:233] Found /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 15:00:32.470124  800014 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 15:00:32.470222  800014 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/newest-cni-320304/config.json ...
	I1006 15:00:32.470420  800014 start.go:360] acquireMachinesLock for newest-cni-320304: {Name:mkc5be1cfc8fcefa1839aef4c67a376cc5095e30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1006 15:00:32.470465  800014 start.go:364] duration metric: took 25.122µs to acquireMachinesLock for "newest-cni-320304"
	I1006 15:00:32.470480  800014 start.go:96] Skipping create...Using existing machine configuration
	I1006 15:00:32.470485  800014 fix.go:54] fixHost starting: 
	I1006 15:00:32.470783  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:00:32.470829  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:00:32.485995  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
	I1006 15:00:32.486570  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:00:32.487077  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:00:32.487097  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:00:32.487547  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:00:32.487798  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:00:32.487982  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetState
	I1006 15:00:32.490460  800014 fix.go:112] recreateIfNeeded on newest-cni-320304: state=Stopped err=<nil>
	I1006 15:00:32.490494  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	W1006 15:00:32.490708  800014 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 15:00:32.492723  800014 out.go:252] * Restarting existing kvm2 VM for "newest-cni-320304" ...
	I1006 15:00:32.492758  800014 main.go:141] libmachine: (newest-cni-320304) Calling .Start
	I1006 15:00:32.492948  800014 main.go:141] libmachine: (newest-cni-320304) starting domain...
	I1006 15:00:32.492980  800014 main.go:141] libmachine: (newest-cni-320304) ensuring networks are active...
	I1006 15:00:32.493917  800014 main.go:141] libmachine: (newest-cni-320304) Ensuring network default is active
	I1006 15:00:32.494604  800014 main.go:141] libmachine: (newest-cni-320304) Ensuring network mk-newest-cni-320304 is active
	I1006 15:00:32.495087  800014 main.go:141] libmachine: (newest-cni-320304) getting domain XML...
	I1006 15:00:32.496282  800014 main.go:141] libmachine: (newest-cni-320304) DBG | starting domain XML:
	I1006 15:00:32.496302  800014 main.go:141] libmachine: (newest-cni-320304) DBG | <domain type='kvm'>
	I1006 15:00:32.496314  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <name>newest-cni-320304</name>
	I1006 15:00:32.496323  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <uuid>307da8b1-d746-4fcf-a5e1-21eeaefa44db</uuid>
	I1006 15:00:32.496336  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <memory unit='KiB'>3145728</memory>
	I1006 15:00:32.496349  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1006 15:00:32.496362  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <vcpu placement='static'>2</vcpu>
	I1006 15:00:32.496373  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <os>
	I1006 15:00:32.496390  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1006 15:00:32.496407  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <boot dev='cdrom'/>
	I1006 15:00:32.496419  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <boot dev='hd'/>
	I1006 15:00:32.496428  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <bootmenu enable='no'/>
	I1006 15:00:32.496439  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   </os>
	I1006 15:00:32.496455  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <features>
	I1006 15:00:32.496467  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <acpi/>
	I1006 15:00:32.496475  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <apic/>
	I1006 15:00:32.496487  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <pae/>
	I1006 15:00:32.496496  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   </features>
	I1006 15:00:32.496510  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1006 15:00:32.496517  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <clock offset='utc'/>
	I1006 15:00:32.496525  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <on_poweroff>destroy</on_poweroff>
	I1006 15:00:32.496541  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <on_reboot>restart</on_reboot>
	I1006 15:00:32.496553  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <on_crash>destroy</on_crash>
	I1006 15:00:32.496564  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   <devices>
	I1006 15:00:32.496575  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1006 15:00:32.496595  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <disk type='file' device='cdrom'>
	I1006 15:00:32.496630  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <driver name='qemu' type='raw'/>
	I1006 15:00:32.496663  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/boot2docker.iso'/>
	I1006 15:00:32.496677  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <target dev='hdc' bus='scsi'/>
	I1006 15:00:32.496685  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <readonly/>
	I1006 15:00:32.496697  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1006 15:00:32.496708  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     </disk>
	I1006 15:00:32.496717  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <disk type='file' device='disk'>
	I1006 15:00:32.496729  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1006 15:00:32.496768  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/newest-cni-320304.rawdisk'/>
	I1006 15:00:32.496790  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <target dev='hda' bus='virtio'/>
	I1006 15:00:32.496820  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1006 15:00:32.496831  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     </disk>
	I1006 15:00:32.496845  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1006 15:00:32.496862  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1006 15:00:32.496916  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     </controller>
	I1006 15:00:32.496947  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1006 15:00:32.496964  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1006 15:00:32.496978  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1006 15:00:32.496991  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     </controller>
	I1006 15:00:32.497003  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <interface type='network'>
	I1006 15:00:32.497016  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <mac address='52:54:00:e9:18:22'/>
	I1006 15:00:32.497037  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <source network='mk-newest-cni-320304'/>
	I1006 15:00:32.497055  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <model type='virtio'/>
	I1006 15:00:32.497067  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1006 15:00:32.497074  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     </interface>
	I1006 15:00:32.497083  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <interface type='network'>
	I1006 15:00:32.497090  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <mac address='52:54:00:48:00:0e'/>
	I1006 15:00:32.497096  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <source network='default'/>
	I1006 15:00:32.497103  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <model type='virtio'/>
	I1006 15:00:32.497114  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1006 15:00:32.497125  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     </interface>
	I1006 15:00:32.497137  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <serial type='pty'>
	I1006 15:00:32.497158  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <target type='isa-serial' port='0'>
	I1006 15:00:32.497193  800014 main.go:141] libmachine: (newest-cni-320304) DBG |         <model name='isa-serial'/>
	I1006 15:00:32.497214  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       </target>
	I1006 15:00:32.497224  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     </serial>
	I1006 15:00:32.497235  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <console type='pty'>
	I1006 15:00:32.497244  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <target type='serial' port='0'/>
	I1006 15:00:32.497254  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     </console>
	I1006 15:00:32.497263  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <input type='mouse' bus='ps2'/>
	I1006 15:00:32.497273  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <input type='keyboard' bus='ps2'/>
	I1006 15:00:32.497296  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <audio id='1' type='none'/>
	I1006 15:00:32.497312  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <memballoon model='virtio'>
	I1006 15:00:32.497346  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1006 15:00:32.497370  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     </memballoon>
	I1006 15:00:32.497384  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     <rng model='virtio'>
	I1006 15:00:32.497397  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <backend model='random'>/dev/random</backend>
	I1006 15:00:32.497412  800014 main.go:141] libmachine: (newest-cni-320304) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1006 15:00:32.497422  800014 main.go:141] libmachine: (newest-cni-320304) DBG |     </rng>
	I1006 15:00:32.497435  800014 main.go:141] libmachine: (newest-cni-320304) DBG |   </devices>
	I1006 15:00:32.497449  800014 main.go:141] libmachine: (newest-cni-320304) DBG | </domain>
	I1006 15:00:32.497465  800014 main.go:141] libmachine: (newest-cni-320304) DBG | 
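
The dump above is the persisted libvirt definition the driver boots from; the same XML can be inspected on the host at any time (illustrative, using the qemu:///system URI from this run):

	virsh --connect qemu:///system dumpxml newest-cni-320304
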
	I1006 15:00:32.907365  800014 main.go:141] libmachine: (newest-cni-320304) waiting for domain to start...
	I1006 15:00:32.909089  800014 main.go:141] libmachine: (newest-cni-320304) domain is now running
	I1006 15:00:32.909118  800014 main.go:141] libmachine: (newest-cni-320304) waiting for IP...
	I1006 15:00:32.910216  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:32.911079  800014 main.go:141] libmachine: (newest-cni-320304) found domain IP: 192.168.50.239
	I1006 15:00:32.911108  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has current primary IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:32.911116  800014 main.go:141] libmachine: (newest-cni-320304) reserving static IP address...
	I1006 15:00:32.911659  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "newest-cni-320304", mac: "52:54:00:e9:18:22", ip: "192.168.50.239"} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 15:58:37 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:32.911701  800014 main.go:141] libmachine: (newest-cni-320304) reserved static IP address 192.168.50.239 for domain newest-cni-320304
	I1006 15:00:32.911724  800014 main.go:141] libmachine: (newest-cni-320304) DBG | skip adding static IP to network mk-newest-cni-320304 - found existing host DHCP lease matching {name: "newest-cni-320304", mac: "52:54:00:e9:18:22", ip: "192.168.50.239"}
	I1006 15:00:32.911736  800014 main.go:141] libmachine: (newest-cni-320304) waiting for SSH...
	I1006 15:00:32.911745  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Getting to WaitForSSH function...
	I1006 15:00:32.914394  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:32.914936  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 15:58:37 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:32.914954  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:32.915171  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Using SSH client type: external
	I1006 15:00:32.915236  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Using SSH private key: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa (-rw-------)
	I1006 15:00:32.915302  800014 main.go:141] libmachine: (newest-cni-320304) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1006 15:00:32.915323  800014 main.go:141] libmachine: (newest-cni-320304) DBG | About to run SSH command:
	I1006 15:00:32.915336  800014 main.go:141] libmachine: (newest-cni-320304) DBG | exit 0
	I1006 15:00:44.211185  800014 main.go:141] libmachine: (newest-cni-320304) DBG | SSH cmd err, output: exit status 255: 
	I1006 15:00:44.211233  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1006 15:00:44.211242  800014 main.go:141] libmachine: (newest-cni-320304) DBG | command : exit 0
	I1006 15:00:44.211247  800014 main.go:141] libmachine: (newest-cni-320304) DBG | err     : exit status 255
	I1006 15:00:44.211256  800014 main.go:141] libmachine: (newest-cni-320304) DBG | output  : 
	I1006 15:00:47.213416  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Getting to WaitForSSH function...
	I1006 15:00:47.216700  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.217192  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:47.217212  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.217568  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Using SSH client type: external
	I1006 15:00:47.217611  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Using SSH private key: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa (-rw-------)
	I1006 15:00:47.217649  800014 main.go:141] libmachine: (newest-cni-320304) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1006 15:00:47.217659  800014 main.go:141] libmachine: (newest-cni-320304) DBG | About to run SSH command:
	I1006 15:00:47.217678  800014 main.go:141] libmachine: (newest-cni-320304) DBG | exit 0
	I1006 15:00:47.353928  800014 main.go:141] libmachine: (newest-cni-320304) DBG | SSH cmd err, output: <nil>: 
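
The first probe at 15:00:32 exits 255 because sshd inside the freshly started guest is not up yet; the retry at 15:00:47 succeeds. A minimal sketch of that probe-and-retry pattern, reusing the key path and options logged above (the loop itself is hypothetical):

	KEY=/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa
	until ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	          -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	          -i "$KEY" docker@192.168.50.239 exit 0; do
	    sleep 3    # the log shows a ~3s back-off between attempts
	done
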
	I1006 15:00:47.354445  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetConfigRaw
	I1006 15:00:47.355146  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetIP
	I1006 15:00:47.357601  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.358037  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:47.358068  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.358325  800014 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/newest-cni-320304/config.json ...
	I1006 15:00:47.358557  800014 machine.go:93] provisionDockerMachine start ...
	I1006 15:00:47.358581  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:00:47.358832  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:00:47.361101  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.361502  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:47.361521  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.361737  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:00:47.361943  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:47.362109  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:47.362344  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:00:47.362531  800014 main.go:141] libmachine: Using SSH client type: native
	I1006 15:00:47.362804  800014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1006 15:00:47.362821  800014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 15:00:47.476640  800014 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1006 15:00:47.476677  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetMachineName
	I1006 15:00:47.476968  800014 buildroot.go:166] provisioning hostname "newest-cni-320304"
	I1006 15:00:47.476992  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetMachineName
	I1006 15:00:47.477282  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:00:47.480440  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.480886  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:47.480909  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.481099  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:00:47.481284  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:47.481478  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:47.481691  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:00:47.481892  800014 main.go:141] libmachine: Using SSH client type: native
	I1006 15:00:47.482214  800014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1006 15:00:47.482233  800014 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-320304 && echo "newest-cni-320304" | sudo tee /etc/hostname
	I1006 15:00:47.613043  800014 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-320304
	
	I1006 15:00:47.613080  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:00:47.616494  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.616941  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:47.616973  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.617207  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:00:47.617431  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:47.617654  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:47.617819  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:00:47.618011  800014 main.go:141] libmachine: Using SSH client type: native
	I1006 15:00:47.618328  800014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1006 15:00:47.618349  800014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-320304' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-320304/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-320304' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 15:00:47.741990  800014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 15:00:47.742024  800014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21701-739942/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-739942/.minikube}
	I1006 15:00:47.742061  800014 buildroot.go:174] setting up certificates
	I1006 15:00:47.742073  800014 provision.go:84] configureAuth start
	I1006 15:00:47.742083  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetMachineName
	I1006 15:00:47.742513  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetIP
	I1006 15:00:47.745633  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.746115  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:47.746144  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.746352  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:00:47.748972  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.749291  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:47.749313  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.749540  800014 provision.go:143] copyHostCerts
	I1006 15:00:47.749617  800014 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem, removing ...
	I1006 15:00:47.749640  800014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem
	I1006 15:00:47.749730  800014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem (1078 bytes)
	I1006 15:00:47.749946  800014 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem, removing ...
	I1006 15:00:47.749960  800014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem
	I1006 15:00:47.750026  800014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem (1123 bytes)
	I1006 15:00:47.750125  800014 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem, removing ...
	I1006 15:00:47.750136  800014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem
	I1006 15:00:47.750175  800014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem (1679 bytes)
	I1006 15:00:47.750265  800014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem org=jenkins.newest-cni-320304 san=[127.0.0.1 192.168.50.239 localhost minikube newest-cni-320304]
	I1006 15:00:47.903896  800014 provision.go:177] copyRemoteCerts
	I1006 15:00:47.903971  800014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 15:00:47.904012  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:00:47.907142  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.907510  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:47.907538  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:47.907760  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:00:47.907987  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:47.908175  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:00:47.908318  800014 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa Username:docker}
	I1006 15:00:47.997185  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1006 15:00:48.028673  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 15:00:48.060617  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 15:00:48.094554  800014 provision.go:87] duration metric: took 352.453245ms to configureAuth
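
configureAuth regenerated a CA-signed server certificate whose SANs match the list logged at 15:00:47.750 (127.0.0.1, 192.168.50.239, localhost, minikube, newest-cni-320304) and pushed it to /etc/docker in the guest. A hypothetical openssl equivalent of the generation step:

	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.newest-cni-320304" \
	    -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -days 1095 -out server.pem \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.239,DNS:localhost,DNS:minikube,DNS:newest-cni-320304')
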
	I1006 15:00:48.094583  800014 buildroot.go:189] setting minikube options for container-runtime
	I1006 15:00:48.094833  800014 config.go:182] Loaded profile config "newest-cni-320304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:00:48.094965  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:00:48.098277  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.098653  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:48.098686  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.098886  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:00:48.099135  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:48.099334  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:48.099474  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:00:48.099641  800014 main.go:141] libmachine: Using SSH client type: native
	I1006 15:00:48.099856  800014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1006 15:00:48.099874  800014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 15:00:48.355683  800014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 15:00:48.355718  800014 machine.go:96] duration metric: took 997.144164ms to provisionDockerMachine
	I1006 15:00:48.355733  800014 start.go:293] postStartSetup for "newest-cni-320304" (driver="kvm2")
	I1006 15:00:48.355745  800014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 15:00:48.355778  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:00:48.356143  800014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 15:00:48.356175  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:00:48.359228  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.359710  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:48.359743  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.359902  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:00:48.360112  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:48.360303  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:00:48.360462  800014 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa Username:docker}
	I1006 15:00:48.449254  800014 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 15:00:48.454805  800014 info.go:137] Remote host: Buildroot 2025.02
	I1006 15:00:48.454844  800014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/addons for local assets ...
	I1006 15:00:48.454928  800014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/files for local assets ...
	I1006 15:00:48.455041  800014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem -> 7438512.pem in /etc/ssl/certs
	I1006 15:00:48.455164  800014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 15:00:48.467372  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 15:00:48.498819  800014 start.go:296] duration metric: took 143.069525ms for postStartSetup
	I1006 15:00:48.498865  800014 fix.go:56] duration metric: took 16.028379009s for fixHost
	I1006 15:00:48.498886  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:00:48.501916  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.502347  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:48.502380  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.502602  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:00:48.502835  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:48.503046  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:48.503254  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:00:48.503513  800014 main.go:141] libmachine: Using SSH client type: native
	I1006 15:00:48.503779  800014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1006 15:00:48.503792  800014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1006 15:00:48.616652  800014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759762848.578520016
	
	I1006 15:00:48.616676  800014 fix.go:216] guest clock: 1759762848.578520016
	I1006 15:00:48.616685  800014 fix.go:229] Guest: 2025-10-06 15:00:48.578520016 +0000 UTC Remote: 2025-10-06 15:00:48.498868611 +0000 UTC m=+16.189849541 (delta=79.651405ms)
	I1006 15:00:48.616713  800014 fix.go:200] guest clock delta is within tolerance: 79.651405ms
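
The tolerance check compares the guest's `date +%s.%N` output against the host clock at the moment the command returned; the 79.65ms delta here is well inside the window, so no resync is forced. Roughly (a hypothetical stand-alone check):

	guest=$(ssh docker@192.168.50.239 date +%s.%N)
	host=$(date +%s.%N)
	echo "guest clock delta: $(echo "$host - $guest" | bc)s"   # sign shows which side is ahead
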
	I1006 15:00:48.616719  800014 start.go:83] releasing machines lock for "newest-cni-320304", held for 16.146244701s
	I1006 15:00:48.616745  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:00:48.617032  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetIP
	I1006 15:00:48.620221  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.620650  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:48.620701  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.620868  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:00:48.621396  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:00:48.621657  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:00:48.621757  800014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 15:00:48.621808  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:00:48.621924  800014 ssh_runner.go:195] Run: cat /version.json
	I1006 15:00:48.621949  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:00:48.625109  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.625452  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.625554  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:48.625607  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.625786  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:00:48.625968  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:48.626159  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:00:48.626162  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:48.626189  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:48.626365  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:00:48.626382  800014 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa Username:docker}
	I1006 15:00:48.626530  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:00:48.626710  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:00:48.626873  800014 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa Username:docker}
	I1006 15:00:48.712899  800014 ssh_runner.go:195] Run: systemctl --version
	I1006 15:00:48.735924  800014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 15:00:48.884313  800014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 15:00:48.891348  800014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 15:00:48.891533  800014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 15:00:48.914194  800014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
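
That find/mv renames any bridge or podman CNI config to *.mk_disabled so the bridge CNI recommended at the start of this run is the only active one; here it caught 87-podman-bridge.conflist. An illustrative plain-shell equivalent:

	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	    [ -e "$f" ] && [ "${f%.mk_disabled}" = "$f" ] && sudo mv "$f" "$f.mk_disabled"
	done
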
	I1006 15:00:48.914220  800014 start.go:495] detecting cgroup driver to use...
	I1006 15:00:48.914302  800014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 15:00:48.935241  800014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 15:00:48.954145  800014 docker.go:218] disabling cri-docker service (if available) ...
	I1006 15:00:48.954211  800014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 15:00:48.972673  800014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 15:00:48.989994  800014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 15:00:49.139731  800014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 15:00:49.355821  800014 docker.go:234] disabling docker service ...
	I1006 15:00:49.355889  800014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 15:00:49.373852  800014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 15:00:49.390519  800014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 15:00:49.554918  800014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 15:00:49.707323  800014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 15:00:49.727791  800014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 15:00:49.753523  800014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 15:00:49.753602  800014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:00:49.767173  800014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 15:00:49.767270  800014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:00:49.781271  800014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:00:49.795859  800014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:00:49.810165  800014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 15:00:49.825976  800014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:00:49.840495  800014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:00:49.862888  800014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
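
Taken together, the sed edits from 15:00:49.753 onward leave the drop-in equivalent to the following (an illustrative reconstruction; the shipped 02-crio.conf carries additional settings):

	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
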
	I1006 15:00:49.877119  800014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 15:00:49.889285  800014 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1006 15:00:49.889361  800014 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1006 15:00:49.910963  800014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
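
The sysctl probe failed only because br_netfilter was not yet loaded; modprobe creates /proc/sys/net/bridge/*, after which bridged pod traffic can traverse iptables. To persist both settings across reboots one could do something like this (illustrative; the file names are hypothetical):

	echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | \
	    sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system
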
	I1006 15:00:49.923870  800014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:00:50.066094  800014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 15:00:50.194514  800014 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 15:00:50.194625  800014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 15:00:50.200782  800014 start.go:563] Will wait 60s for crictl version
	I1006 15:00:50.200845  800014 ssh_runner.go:195] Run: which crictl
	I1006 15:00:50.205583  800014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 15:00:50.253992  800014 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1006 15:00:50.254102  800014 ssh_runner.go:195] Run: crio --version
	I1006 15:00:50.287282  800014 ssh_runner.go:195] Run: crio --version
	I1006 15:00:50.321943  800014 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1006 15:00:50.323117  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetIP
	I1006 15:00:50.326477  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:50.326865  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:00:50.326894  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:00:50.327226  800014 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1006 15:00:50.332239  800014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 15:00:50.350299  800014 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1006 15:00:50.351687  800014 kubeadm.go:883] updating cluster {Name:newest-cni-320304 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-320304 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.239 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 15:00:50.351864  800014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:00:50.351960  800014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:00:50.395645  800014 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1006 15:00:50.395736  800014 ssh_runner.go:195] Run: which lz4
	I1006 15:00:50.400489  800014 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1006 15:00:50.405880  800014 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1006 15:00:50.405924  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1006 15:00:52.131019  800014 crio.go:462] duration metric: took 1.730567699s to copy over tarball
	I1006 15:00:52.131120  800014 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1006 15:00:53.814908  800014 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.683748124s)
	I1006 15:00:53.814974  800014 crio.go:469] duration metric: took 1.683891108s to extract the tarball
	I1006 15:00:53.814990  800014 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1006 15:00:53.860075  800014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:00:53.907299  800014 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:00:53.907333  800014 cache_images.go:85] Images are preloaded, skipping loading
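The preload dance above: `crictl images` reports the expected images missing, `stat` confirms no tarball is on the node, the cached tarball is scp'd over, extracted into /var with xattrs preserved, and removed; a second `crictl images` then confirms the images are present. A local sketch of that sequence, where copyCachedTarball is a hypothetical stand-in for the scp step:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); os.IsNotExist(err) {
            copyCachedTarball(tarball) // stand-in for the scp in the log
        }
        // Mirror the log's tar invocation: preserve security.capability
        // xattrs and decompress with lz4 into /var.
        cmd := exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if err := cmd.Run(); err != nil {
            panic(err)
        }
        _ = os.Remove(tarball)
    }

    // copyCachedTarball is a placeholder; the real flow copies the
    // host-side cache file over SSH.
    func copyCachedTarball(dst string) {}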
	I1006 15:00:53.907345  800014 kubeadm.go:934] updating node { 192.168.50.239 8443 v1.34.1 crio true true} ...
	I1006 15:00:53.907475  800014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-320304 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-320304 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 15:00:53.907547  800014 ssh_runner.go:195] Run: crio config
	I1006 15:00:53.954484  800014 cni.go:84] Creating CNI manager for ""
	I1006 15:00:53.954522  800014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 15:00:53.954554  800014 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1006 15:00:53.954601  800014 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.239 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-320304 NodeName:newest-cni-320304 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 15:00:53.954739  800014 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-320304"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.239"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.239"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 15:00:53.954833  800014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 15:00:53.968655  800014 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 15:00:53.968747  800014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 15:00:53.981942  800014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1006 15:00:54.005154  800014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 15:00:54.028210  800014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
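The kubeadm.yaml.new written above is the multi-document config rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check such a file is to decode each document and print its apiVersion/kind; this sketch uses the third-party gopkg.in/yaml.v3 package and is purely illustrative, not something minikube does:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            // Decode only the identifying fields of each YAML document.
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s/%s\n", doc.APIVersion, doc.Kind)
        }
    }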
	I1006 15:00:54.052131  800014 ssh_runner.go:195] Run: grep 192.168.50.239	control-plane.minikube.internal$ /etc/hosts
	I1006 15:00:54.057215  800014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 15:00:54.073676  800014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:00:54.226518  800014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:00:54.259926  800014 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/newest-cni-320304 for IP: 192.168.50.239
	I1006 15:00:54.259954  800014 certs.go:195] generating shared ca certs ...
	I1006 15:00:54.259978  800014 certs.go:227] acquiring lock for ca certs: {Name:mkac26b60e1fd10143a5d4dc5ca0de64e9dd4f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:00:54.260164  800014 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key
	I1006 15:00:54.260249  800014 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key
	I1006 15:00:54.260266  800014 certs.go:257] generating profile certs ...
	I1006 15:00:54.260357  800014 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/newest-cni-320304/client.key
	I1006 15:00:54.260414  800014 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/newest-cni-320304/apiserver.key.c6664537
	I1006 15:00:54.260448  800014 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/newest-cni-320304/proxy-client.key
	I1006 15:00:54.260573  800014 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem (1338 bytes)
	W1006 15:00:54.260643  800014 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851_empty.pem, impossibly tiny 0 bytes
	I1006 15:00:54.260660  800014 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem (1679 bytes)
	I1006 15:00:54.260695  800014 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem (1078 bytes)
	I1006 15:00:54.260734  800014 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem (1123 bytes)
	I1006 15:00:54.260766  800014 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem (1679 bytes)
	I1006 15:00:54.260836  800014 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 15:00:54.261547  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 15:00:54.294273  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 15:00:54.334776  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 15:00:54.367791  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 15:00:54.402418  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/newest-cni-320304/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 15:00:54.435312  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/newest-cni-320304/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 15:00:54.469114  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/newest-cni-320304/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 15:00:54.501963  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/newest-cni-320304/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 15:00:54.534357  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 15:00:54.567264  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem --> /usr/share/ca-certificates/743851.pem (1338 bytes)
	I1006 15:00:54.600758  800014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /usr/share/ca-certificates/7438512.pem (1708 bytes)
	I1006 15:00:54.633559  800014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 15:00:54.657543  800014 ssh_runner.go:195] Run: openssl version
	I1006 15:00:54.665655  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/743851.pem && ln -fs /usr/share/ca-certificates/743851.pem /etc/ssl/certs/743851.pem"
	I1006 15:00:54.680343  800014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/743851.pem
	I1006 15:00:54.686428  800014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 13:59 /usr/share/ca-certificates/743851.pem
	I1006 15:00:54.686492  800014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/743851.pem
	I1006 15:00:54.694794  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/743851.pem /etc/ssl/certs/51391683.0"
	I1006 15:00:54.709756  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7438512.pem && ln -fs /usr/share/ca-certificates/7438512.pem /etc/ssl/certs/7438512.pem"
	I1006 15:00:54.724397  800014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7438512.pem
	I1006 15:00:54.730764  800014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 13:59 /usr/share/ca-certificates/7438512.pem
	I1006 15:00:54.730835  800014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7438512.pem
	I1006 15:00:54.738890  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7438512.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 15:00:54.753694  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 15:00:54.768563  800014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:00:54.774561  800014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:50 /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:00:54.774658  800014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:00:54.782934  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
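Each cert install above follows the same pattern: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink <hash>.0 to it under /etc/ssl/certs so OpenSSL's hashed directory lookup finds it. A sketch of the hash-and-link step, shelling out to openssl exactly as the log does (writing under /etc/ssl/certs needs root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        // `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // mimic `ln -fs`: replace any existing link
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
    }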
	I1006 15:00:54.797605  800014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 15:00:54.803544  800014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 15:00:54.811612  800014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 15:00:54.819651  800014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 15:00:54.828009  800014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 15:00:54.836461  800014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 15:00:54.844763  800014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
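The `openssl x509 -checkend 86400` probes above ask whether each control-plane cert expires within the next 24 hours. The same check in pure Go via crypto/x509, with the file path taken from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of -checkend 86400: fail if NotAfter is within 24h.
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h; would regenerate")
            os.Exit(1)
        }
        fmt.Println("certificate still valid")
    }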
	I1006 15:00:54.853167  800014 kubeadm.go:400] StartCluster: {Name:newest-cni-320304 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-320304 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.239 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:00:54.853326  800014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 15:00:54.853390  800014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 15:00:54.898546  800014 cri.go:89] found id: ""
	I1006 15:00:54.898639  800014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 15:00:54.912291  800014 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 15:00:54.912317  800014 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 15:00:54.912377  800014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 15:00:54.925537  800014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 15:00:54.926521  800014 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-320304" does not appear in /home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 15:00:54.926892  800014 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-739942/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-320304" cluster setting kubeconfig missing "newest-cni-320304" context setting]
	I1006 15:00:54.927474  800014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/kubeconfig: {Name:mkb3c6455f820b9fd25629981fabc6cb3d63fb4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:00:54.929084  800014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 15:00:54.944643  800014 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.50.239
	I1006 15:00:54.944681  800014 kubeadm.go:1160] stopping kube-system containers ...
	I1006 15:00:54.944697  800014 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1006 15:00:54.944759  800014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 15:00:54.997697  800014 cri.go:89] found id: ""
	I1006 15:00:54.997773  800014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 15:00:55.029415  800014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 15:00:55.043119  800014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 15:00:55.043148  800014 kubeadm.go:157] found existing configuration files:
	
	I1006 15:00:55.043206  800014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 15:00:55.055190  800014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 15:00:55.055252  800014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 15:00:55.068630  800014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 15:00:55.080841  800014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 15:00:55.080917  800014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 15:00:55.094335  800014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 15:00:55.106758  800014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 15:00:55.106823  800014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 15:00:55.120023  800014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 15:00:55.132161  800014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 15:00:55.132246  800014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 15:00:55.145202  800014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 15:00:55.158527  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 15:00:55.231475  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 15:00:56.290618  800014 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.059071835s)
	I1006 15:00:56.290718  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 15:00:56.561367  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 15:00:56.648958  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
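Rather than a full `kubeadm init`, the restart path replays individual init phases against the saved config, as the five commands above show. A sketch of that sequencing; the log runs these via sudo over SSH with the versioned binary on PATH, and the `addon all` phase follows later, once the apiserver is healthy:

    package main

    import (
        "os/exec"
        "strings"
    )

    func main() {
        const kubeadm = "/var/lib/minikube/binaries/v1.34.1/kubeadm"
        const cfg = "/var/tmp/minikube/kubeadm.yaml"
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, strings.Fields(phase)...)
            args = append(args, "--config", cfg)
            if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
                panic(string(out)) // a failed phase aborts the restart
            }
        }
    }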
	I1006 15:00:56.741688  800014 api_server.go:52] waiting for apiserver process to appear ...
	I1006 15:00:56.741774  800014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 15:00:57.242379  800014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 15:00:57.742784  800014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 15:00:58.242452  800014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 15:00:58.278495  800014 api_server.go:72] duration metric: took 1.536814852s to wait for apiserver process to appear ...
	I1006 15:00:58.278524  800014 api_server.go:88] waiting for apiserver healthz status ...
	I1006 15:00:58.278544  800014 api_server.go:253] Checking apiserver healthz at https://192.168.50.239:8443/healthz ...
	I1006 15:01:00.382942  800014 api_server.go:279] https://192.168.50.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1006 15:01:00.382976  800014 api_server.go:103] status: https://192.168.50.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 15:01:00.382995  800014 api_server.go:253] Checking apiserver healthz at https://192.168.50.239:8443/healthz ...
	I1006 15:01:00.459657  800014 api_server.go:279] https://192.168.50.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1006 15:01:00.459692  800014 api_server.go:103] status: https://192.168.50.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 15:01:00.779323  800014 api_server.go:253] Checking apiserver healthz at https://192.168.50.239:8443/healthz ...
	I1006 15:01:00.785137  800014 api_server.go:279] https://192.168.50.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 15:01:00.785168  800014 api_server.go:103] status: https://192.168.50.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 15:01:01.278828  800014 api_server.go:253] Checking apiserver healthz at https://192.168.50.239:8443/healthz ...
	I1006 15:01:01.284451  800014 api_server.go:279] https://192.168.50.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 15:01:01.284483  800014 api_server.go:103] status: https://192.168.50.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 15:01:01.778729  800014 api_server.go:253] Checking apiserver healthz at https://192.168.50.239:8443/healthz ...
	I1006 15:01:01.786751  800014 api_server.go:279] https://192.168.50.239:8443/healthz returned 200:
	ok
	I1006 15:01:01.796610  800014 api_server.go:141] control plane version: v1.34.1
	I1006 15:01:01.796649  800014 api_server.go:131] duration metric: took 3.518117302s to wait for apiserver health ...
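The healthz wait above polls roughly every 500ms and tolerates the interim responses until a plain 200 "ok" arrives: first 403 (the anonymous user is rejected before RBAC bootstrap), then 500 (the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks are still failing). A sketch of such a poller; InsecureSkipVerify here stands in for minikube's real client-certificate handling:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.50.239:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                // 403 and 500 are expected while bootstrap completes.
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("apiserver never became healthy")
    }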
	I1006 15:01:01.796662  800014 cni.go:84] Creating CNI manager for ""
	I1006 15:01:01.796671  800014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 15:01:01.798462  800014 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1006 15:01:01.799977  800014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1006 15:01:01.819614  800014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
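The 496-byte file written to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI chain recommended above. A representative bridge + host-local conflist for the 10.42.0.0/16 pod CIDR used here; the exact field values are an assumption for illustration, not a dump of minikube's actual file:

    package main

    import "os"

    // conflist is an illustrative bridge CNI configuration, not minikube's
    // verbatim file contents.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }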
	I1006 15:01:01.853903  800014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 15:01:01.866826  800014 system_pods.go:59] 9 kube-system pods found
	I1006 15:01:01.866872  800014 system_pods.go:61] "coredns-66bc5c9577-55m7s" [7fe1d7e6-ecbc-417c-b1ee-a30e670d187b] Failed / Ready:PodFailed / ContainersReady:PodFailed
	I1006 15:01:01.866883  800014 system_pods.go:61] "coredns-66bc5c9577-v4z6x" [614cd5e8-2ac0-433c-bb16-fbce3d39e809] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 15:01:01.866893  800014 system_pods.go:61] "etcd-newest-cni-320304" [3632212e-1c4b-4d23-a28f-e60675a8a306] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 15:01:01.866903  800014 system_pods.go:61] "kube-apiserver-newest-cni-320304" [052bd98a-6531-49ce-bcb1-ea70e70fc58f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 15:01:01.866955  800014 system_pods.go:61] "kube-controller-manager-newest-cni-320304" [bfa6e8c5-2f9f-426f-b5ab-5aedf82c88d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 15:01:01.866967  800014 system_pods.go:61] "kube-proxy-phcsm" [ee3d0757-cc0a-4b5f-859b-a91a5c17640a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1006 15:01:01.866973  800014 system_pods.go:61] "kube-scheduler-newest-cni-320304" [a7b68391-bc44-4054-a74e-d08f7d0a3699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 15:01:01.866978  800014 system_pods.go:61] "metrics-server-746fcd58dc-zpwx2" [1a3db77c-f7a0-4f6c-8d9a-29d112d94b0a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 15:01:01.866987  800014 system_pods.go:61] "storage-provisioner" [44871329-535f-4e66-9196-37c9d0b1adff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 15:01:01.866993  800014 system_pods.go:74] duration metric: took 13.068903ms to wait for pod list to return data ...
	I1006 15:01:01.867003  800014 node_conditions.go:102] verifying NodePressure condition ...
	I1006 15:01:01.872661  800014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1006 15:01:01.872694  800014 node_conditions.go:123] node cpu capacity is 2
	I1006 15:01:01.872709  800014 node_conditions.go:105] duration metric: took 5.701224ms to run NodePressure ...
	I1006 15:01:01.872778  800014 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 15:01:02.259688  800014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 15:01:02.293715  800014 ops.go:34] apiserver oom_adj: -16
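The oom_adj probe above reads /proc/<pid>/oom_adj for the apiserver; -16 tells the kernel OOM killer to strongly prefer other victims. The same probe in Go, with the pgrep pattern simplified from the log's `-xnf kube-apiserver.*minikube.*`:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep exits non-zero when no process matches.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", pid))
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }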
	I1006 15:01:02.293742  800014 kubeadm.go:601] duration metric: took 7.381416782s to restartPrimaryControlPlane
	I1006 15:01:02.293754  800014 kubeadm.go:402] duration metric: took 7.440596672s to StartCluster
	I1006 15:01:02.293779  800014 settings.go:142] acquiring lock: {Name:mk95ac14a932277c5d6f71123bdccb175d870212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:02.293930  800014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 15:01:02.295011  800014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/kubeconfig: {Name:mkb3c6455f820b9fd25629981fabc6cb3d63fb4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:02.295289  800014 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.239 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 15:01:02.295386  800014 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 15:01:02.295488  800014 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-320304"
	I1006 15:01:02.295514  800014 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-320304"
	I1006 15:01:02.295507  800014 addons.go:69] Setting default-storageclass=true in profile "newest-cni-320304"
	W1006 15:01:02.295528  800014 addons.go:247] addon storage-provisioner should already be in state true
	I1006 15:01:02.295538  800014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-320304"
	I1006 15:01:02.295564  800014 config.go:182] Loaded profile config "newest-cni-320304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:02.295610  800014 addons.go:69] Setting dashboard=true in profile "newest-cni-320304"
	I1006 15:01:02.295648  800014 addons.go:238] Setting addon dashboard=true in "newest-cni-320304"
	W1006 15:01:02.295657  800014 addons.go:247] addon dashboard should already be in state true
	I1006 15:01:02.295682  800014 host.go:66] Checking if "newest-cni-320304" exists ...
	I1006 15:01:02.295688  800014 addons.go:69] Setting metrics-server=true in profile "newest-cni-320304"
	I1006 15:01:02.295719  800014 addons.go:238] Setting addon metrics-server=true in "newest-cni-320304"
	W1006 15:01:02.295733  800014 addons.go:247] addon metrics-server should already be in state true
	I1006 15:01:02.295780  800014 host.go:66] Checking if "newest-cni-320304" exists ...
	I1006 15:01:02.295567  800014 host.go:66] Checking if "newest-cni-320304" exists ...
	I1006 15:01:02.295999  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:01:02.296049  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:01:02.296133  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:01:02.296186  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:01:02.296196  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:01:02.296228  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:01:02.296303  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:01:02.296346  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:01:02.297059  800014 out.go:179] * Verifying Kubernetes components...
	I1006 15:01:02.298471  800014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:02.312221  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35815
	I1006 15:01:02.313068  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:01:02.313911  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:01:02.313942  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:01:02.314387  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:01:02.314636  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I1006 15:01:02.314668  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38419
	I1006 15:01:02.315109  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:01:02.315075  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:01:02.315160  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:01:02.315216  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:01:02.315346  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I1006 15:01:02.315758  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:01:02.315778  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:01:02.315894  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:01:02.316248  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:01:02.316266  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:01:02.316343  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:01:02.316475  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:01:02.316489  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:01:02.316669  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:01:02.316897  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:01:02.316937  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:01:02.316922  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetState
	I1006 15:01:02.316997  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:01:02.317608  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:01:02.317660  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:01:02.320976  800014 addons.go:238] Setting addon default-storageclass=true in "newest-cni-320304"
	W1006 15:01:02.321015  800014 addons.go:247] addon default-storageclass should already be in state true
	I1006 15:01:02.321051  800014 host.go:66] Checking if "newest-cni-320304" exists ...
	I1006 15:01:02.321470  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:01:02.321527  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:01:02.331743  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I1006 15:01:02.331922  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33197
	I1006 15:01:02.332328  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:01:02.332416  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:01:02.332881  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:01:02.332908  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:01:02.333088  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:01:02.333109  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:01:02.333257  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:01:02.333467  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetState
	I1006 15:01:02.333486  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:01:02.333692  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetState
	I1006 15:01:02.336386  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:01:02.336575  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:01:02.337146  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I1006 15:01:02.337559  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:01:02.338092  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:01:02.338137  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:01:02.338526  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:01:02.338644  800014 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1006 15:01:02.338646  800014 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1006 15:01:02.338783  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetState
	I1006 15:01:02.340419  800014 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1006 15:01:02.340441  800014 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1006 15:01:02.340464  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:01:02.341257  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:01:02.341801  800014 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1006 15:01:02.343016  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36691
	I1006 15:01:02.343267  800014 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1006 15:01:02.343292  800014 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1006 15:01:02.343297  800014 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 15:01:02.343313  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:01:02.343546  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:01:02.344119  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:01:02.344149  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:01:02.344574  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:01:02.344939  800014 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:02.344962  800014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 15:01:02.344983  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:01:02.345459  800014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 15:01:02.345516  800014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 15:01:02.345554  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:01:02.346392  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:01:02.346430  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:01:02.346668  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:01:02.346860  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:01:02.347022  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:01:02.347186  800014 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa Username:docker}
	I1006 15:01:02.347999  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:01:02.348672  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:01:02.348728  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:01:02.348932  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:01:02.349102  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:01:02.349299  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:01:02.349459  800014 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa Username:docker}
	I1006 15:01:02.350117  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:01:02.350687  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:01:02.350708  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:01:02.351111  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:01:02.351287  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:01:02.351425  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:01:02.351549  800014 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa Username:docker}
	I1006 15:01:02.367602  800014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43563
	I1006 15:01:02.368163  800014 main.go:141] libmachine: () Calling .GetVersion
	I1006 15:01:02.368646  800014 main.go:141] libmachine: Using API Version  1
	I1006 15:01:02.368671  800014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 15:01:02.369177  800014 main.go:141] libmachine: () Calling .GetMachineName
	I1006 15:01:02.369450  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetState
	I1006 15:01:02.371973  800014 main.go:141] libmachine: (newest-cni-320304) Calling .DriverName
	I1006 15:01:02.372293  800014 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 15:01:02.372313  800014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 15:01:02.372345  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHHostname
	I1006 15:01:02.377094  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:01:02.377951  800014 main.go:141] libmachine: (newest-cni-320304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:18:22", ip: ""} in network mk-newest-cni-320304: {Iface:virbr2 ExpiryTime:2025-10-06 16:00:43 +0000 UTC Type:0 Mac:52:54:00:e9:18:22 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:newest-cni-320304 Clientid:01:52:54:00:e9:18:22}
	I1006 15:01:02.377986  800014 main.go:141] libmachine: (newest-cni-320304) DBG | domain newest-cni-320304 has defined IP address 192.168.50.239 and MAC address 52:54:00:e9:18:22 in network mk-newest-cni-320304
	I1006 15:01:02.378239  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHPort
	I1006 15:01:02.378486  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHKeyPath
	I1006 15:01:02.378720  800014 main.go:141] libmachine: (newest-cni-320304) Calling .GetSSHUsername
	I1006 15:01:02.378903  800014 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa Username:docker}
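	The three sshutil.go entries above each build an SSH client from the same tuple: IP 192.168.50.239, port 22, the profile's id_rsa key, and user docker. As a rough sketch of that connection pattern using golang.org/x/crypto/ssh (this is not minikube's actual sshutil code; the remote command at the end is illustrative, and the key path and address are taken from the log lines above):
	
		package main
	
		import (
			"fmt"
			"os"
	
			"golang.org/x/crypto/ssh"
		)
	
		func main() {
			// Values taken from the sshutil.go log lines above.
			keyPath := "/home/jenkins/minikube-integration/21701-739942/.minikube/machines/newest-cni-320304/id_rsa"
			key, err := os.ReadFile(keyPath)
			if err != nil {
				panic(err)
			}
			signer, err := ssh.ParsePrivateKey(key)
			if err != nil {
				panic(err)
			}
			cfg := &ssh.ClientConfig{
				User:            "docker",
				Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
				HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
			}
			client, err := ssh.Dial("tcp", "192.168.50.239:22", cfg)
			if err != nil {
				panic(err)
			}
			defer client.Close()
	
			sess, err := client.NewSession()
			if err != nil {
				panic(err)
			}
			defer sess.Close()
			// Illustrative command; ssh_runner.go issues commands in this style.
			out, _ := sess.CombinedOutput("sudo systemctl is-active kubelet")
			fmt.Printf("kubelet: %s", out)
		}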
	I1006 15:01:02.669492  800014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:01:02.717879  800014 api_server.go:52] waiting for apiserver process to appear ...
	I1006 15:01:02.717982  800014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 15:01:02.775914  800014 api_server.go:72] duration metric: took 480.583079ms to wait for apiserver process to appear ...
	I1006 15:01:02.775956  800014 api_server.go:88] waiting for apiserver healthz status ...
	I1006 15:01:02.775990  800014 api_server.go:253] Checking apiserver healthz at https://192.168.50.239:8443/healthz ...
	I1006 15:01:02.789294  800014 api_server.go:279] https://192.168.50.239:8443/healthz returned 200:
	ok
	I1006 15:01:02.791066  800014 api_server.go:141] control plane version: v1.34.1
	I1006 15:01:02.791096  800014 api_server.go:131] duration metric: took 15.132998ms to wait for apiserver health ...
	I1006 15:01:02.791114  800014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 15:01:02.805455  800014 system_pods.go:59] 8 kube-system pods found
	I1006 15:01:02.805498  800014 system_pods.go:61] "coredns-66bc5c9577-v4z6x" [614cd5e8-2ac0-433c-bb16-fbce3d39e809] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 15:01:02.805510  800014 system_pods.go:61] "etcd-newest-cni-320304" [3632212e-1c4b-4d23-a28f-e60675a8a306] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 15:01:02.805522  800014 system_pods.go:61] "kube-apiserver-newest-cni-320304" [052bd98a-6531-49ce-bcb1-ea70e70fc58f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 15:01:02.805530  800014 system_pods.go:61] "kube-controller-manager-newest-cni-320304" [bfa6e8c5-2f9f-426f-b5ab-5aedf82c88d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 15:01:02.805537  800014 system_pods.go:61] "kube-proxy-phcsm" [ee3d0757-cc0a-4b5f-859b-a91a5c17640a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1006 15:01:02.805544  800014 system_pods.go:61] "kube-scheduler-newest-cni-320304" [a7b68391-bc44-4054-a74e-d08f7d0a3699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 15:01:02.805553  800014 system_pods.go:61] "metrics-server-746fcd58dc-zpwx2" [1a3db77c-f7a0-4f6c-8d9a-29d112d94b0a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 15:01:02.805560  800014 system_pods.go:61] "storage-provisioner" [44871329-535f-4e66-9196-37c9d0b1adff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 15:01:02.805571  800014 system_pods.go:74] duration metric: took 14.447321ms to wait for pod list to return data ...
	I1006 15:01:02.805584  800014 default_sa.go:34] waiting for default service account to be created ...
	I1006 15:01:02.805825  800014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 15:01:02.810526  800014 default_sa.go:45] found service account: "default"
	I1006 15:01:02.810559  800014 default_sa.go:55] duration metric: took 4.953414ms for default service account to be created ...
	I1006 15:01:02.810576  800014 kubeadm.go:586] duration metric: took 515.254934ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1006 15:01:02.810620  800014 node_conditions.go:102] verifying NodePressure condition ...
	I1006 15:01:02.818438  800014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1006 15:01:02.818464  800014 node_conditions.go:123] node cpu capacity is 2
	I1006 15:01:02.818479  800014 node_conditions.go:105] duration metric: took 7.852614ms to run NodePressure ...
	I1006 15:01:02.818494  800014 start.go:241] waiting for startup goroutines ...
	I1006 15:01:02.822101  800014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:03.049435  800014 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1006 15:01:03.049462  800014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1006 15:01:03.054200  800014 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1006 15:01:03.054238  800014 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1006 15:01:03.135273  800014 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1006 15:01:03.135302  800014 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1006 15:01:03.166807  800014 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1006 15:01:03.166840  800014 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1006 15:01:03.288880  800014 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 15:01:03.288914  800014 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1006 15:01:03.297671  800014 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1006 15:01:03.297700  800014 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1006 15:01:03.412390  800014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 15:01:03.413627  800014 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1006 15:01:03.413655  800014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1006 15:01:03.451647  800014 main.go:141] libmachine: Making call to close driver server
	I1006 15:01:03.451674  800014 main.go:141] libmachine: (newest-cni-320304) Calling .Close
	I1006 15:01:03.452163  800014 main.go:141] libmachine: Successfully made call to close driver server
	I1006 15:01:03.452196  800014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 15:01:03.452228  800014 main.go:141] libmachine: Making call to close driver server
	I1006 15:01:03.452243  800014 main.go:141] libmachine: (newest-cni-320304) Calling .Close
	I1006 15:01:03.452520  800014 main.go:141] libmachine: Successfully made call to close driver server
	I1006 15:01:03.452537  800014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 15:01:03.452558  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Closing plugin on server side
	I1006 15:01:03.467538  800014 main.go:141] libmachine: Making call to close driver server
	I1006 15:01:03.467572  800014 main.go:141] libmachine: (newest-cni-320304) Calling .Close
	I1006 15:01:03.467906  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Closing plugin on server side
	I1006 15:01:03.467980  800014 main.go:141] libmachine: Successfully made call to close driver server
	I1006 15:01:03.468001  800014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 15:01:03.483185  800014 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1006 15:01:03.483217  800014 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1006 15:01:03.594979  800014 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1006 15:01:03.595029  800014 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1006 15:01:03.674119  800014 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1006 15:01:03.674159  800014 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1006 15:01:03.854137  800014 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1006 15:01:03.854180  800014 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1006 15:01:04.063326  800014 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 15:01:04.063375  800014 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1006 15:01:04.175099  800014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 15:01:05.072329  800014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.250182673s)
	I1006 15:01:05.072402  800014 main.go:141] libmachine: Making call to close driver server
	I1006 15:01:05.072416  800014 main.go:141] libmachine: (newest-cni-320304) Calling .Close
	I1006 15:01:05.072799  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Closing plugin on server side
	I1006 15:01:05.072842  800014 main.go:141] libmachine: Successfully made call to close driver server
	I1006 15:01:05.072856  800014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 15:01:05.072868  800014 main.go:141] libmachine: Making call to close driver server
	I1006 15:01:05.072881  800014 main.go:141] libmachine: (newest-cni-320304) Calling .Close
	I1006 15:01:05.073164  800014 main.go:141] libmachine: Successfully made call to close driver server
	I1006 15:01:05.073185  800014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 15:01:05.073199  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Closing plugin on server side
	I1006 15:01:05.177304  800014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.764850606s)
	I1006 15:01:05.177374  800014 main.go:141] libmachine: Making call to close driver server
	I1006 15:01:05.177386  800014 main.go:141] libmachine: (newest-cni-320304) Calling .Close
	I1006 15:01:05.177727  800014 main.go:141] libmachine: Successfully made call to close driver server
	I1006 15:01:05.177744  800014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 15:01:05.177759  800014 main.go:141] libmachine: Making call to close driver server
	I1006 15:01:05.177767  800014 main.go:141] libmachine: (newest-cni-320304) Calling .Close
	I1006 15:01:05.177771  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Closing plugin on server side
	I1006 15:01:05.178034  800014 main.go:141] libmachine: Successfully made call to close driver server
	I1006 15:01:05.178051  800014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 15:01:05.178062  800014 addons.go:479] Verifying addon metrics-server=true in "newest-cni-320304"
	I1006 15:01:05.178067  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Closing plugin on server side
	I1006 15:01:05.756548  800014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.581376244s)
	I1006 15:01:05.756630  800014 main.go:141] libmachine: Making call to close driver server
	I1006 15:01:05.756647  800014 main.go:141] libmachine: (newest-cni-320304) Calling .Close
	I1006 15:01:05.757025  800014 main.go:141] libmachine: Successfully made call to close driver server
	I1006 15:01:05.757047  800014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 15:01:05.757059  800014 main.go:141] libmachine: Making call to close driver server
	I1006 15:01:05.757067  800014 main.go:141] libmachine: (newest-cni-320304) Calling .Close
	I1006 15:01:05.757115  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Closing plugin on server side
	I1006 15:01:05.757378  800014 main.go:141] libmachine: Successfully made call to close driver server
	I1006 15:01:05.757397  800014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 15:01:05.757414  800014 main.go:141] libmachine: (newest-cni-320304) DBG | Closing plugin on server side
	I1006 15:01:05.759908  800014 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-320304 addons enable metrics-server
	
	I1006 15:01:05.761801  800014 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1006 15:01:05.763097  800014 addons.go:514] duration metric: took 3.467723069s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1006 15:01:05.763141  800014 start.go:246] waiting for cluster config update ...
	I1006 15:01:05.763155  800014 start.go:255] writing updated cluster config ...
	I1006 15:01:05.763421  800014 ssh_runner.go:195] Run: rm -f paused
	I1006 15:01:05.822944  800014 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1006 15:01:05.824439  800014 out.go:179] * Done! kubectl is now configured to use "newest-cni-320304" cluster and "default" namespace by default
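	For context on the healthz wait logged at 15:01:02 above: api_server.go polls https://192.168.50.239:8443/healthz until it returns 200 with body "ok" or a deadline expires. A minimal sketch of such a poll loop, assuming that endpoint and a self-signed apiserver certificate (hence the TLS verification skip); this mirrors the behavior, not minikube's exact implementation:
	
		package main
	
		import (
			"context"
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)
	
		// waitHealthz polls url until it returns 200 "ok" or ctx expires.
		func waitHealthz(ctx context.Context, url string) error {
			client := &http.Client{
				Transport: &http.Transport{
					// The test apiserver serves a self-signed certificate.
					TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
				},
				Timeout: 5 * time.Second,
			}
			for {
				resp, err := client.Get(url)
				if err == nil {
					body, _ := io.ReadAll(resp.Body)
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK && string(body) == "ok" {
						return nil
					}
				}
				select {
				case <-ctx.Done():
					return fmt.Errorf("healthz never became ready: %w", ctx.Err())
				case <-time.After(500 * time.Millisecond):
				}
			}
		}
	
		func main() {
			ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
			defer cancel()
			fmt.Println(waitHealthz(ctx, "https://192.168.50.239:8443/healthz"))
		}
	
	The deadline is the interesting part: the kubeadm failure later in this report surfaces exactly as a context-deadline error from the client rate limiter once its 4m0s control-plane wait is exhausted.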
	I1006 15:02:14.780190  781281 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00032195s
	I1006 15:02:14.780381  781281 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000528905s
	I1006 15:02:14.780538  781281 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000204742s
	I1006 15:02:14.780550  781281 kubeadm.go:318] 
	I1006 15:02:14.780690  781281 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 15:02:14.780822  781281 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 15:02:14.780956  781281 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 15:02:14.781092  781281 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 15:02:14.781196  781281 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 15:02:14.781305  781281 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 15:02:14.781316  781281 kubeadm.go:318] 
	I1006 15:02:14.784045  781281 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 15:02:14.784725  781281 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.39.45:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 15:02:14.784809  781281 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 15:02:14.784898  781281 kubeadm.go:402] duration metric: took 12m15.846176939s to StartCluster
	I1006 15:02:14.784955  781281 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 15:02:14.785014  781281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 15:02:14.834909  781281 cri.go:89] found id: ""
	I1006 15:02:14.834952  781281 logs.go:282] 0 containers: []
	W1006 15:02:14.834961  781281 logs.go:284] No container was found matching "kube-apiserver"
	I1006 15:02:14.834978  781281 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 15:02:14.835042  781281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 15:02:14.877368  781281 cri.go:89] found id: "7d267996d44cd2b7da2cb930a76223e8fa651feae8248366eefcd22ecfa7099c"
	I1006 15:02:14.877397  781281 cri.go:89] found id: ""
	I1006 15:02:14.877409  781281 logs.go:282] 1 containers: [7d267996d44cd2b7da2cb930a76223e8fa651feae8248366eefcd22ecfa7099c]
	I1006 15:02:14.877480  781281 ssh_runner.go:195] Run: which crictl
	I1006 15:02:14.882993  781281 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 15:02:14.883091  781281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 15:02:14.923480  781281 cri.go:89] found id: ""
	I1006 15:02:14.923511  781281 logs.go:282] 0 containers: []
	W1006 15:02:14.923522  781281 logs.go:284] No container was found matching "coredns"
	I1006 15:02:14.923530  781281 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 15:02:14.923608  781281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 15:02:14.963875  781281 cri.go:89] found id: ""
	I1006 15:02:14.963907  781281 logs.go:282] 0 containers: []
	W1006 15:02:14.963916  781281 logs.go:284] No container was found matching "kube-scheduler"
	I1006 15:02:14.963922  781281 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 15:02:14.963990  781281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 15:02:15.007141  781281 cri.go:89] found id: ""
	I1006 15:02:15.007171  781281 logs.go:282] 0 containers: []
	W1006 15:02:15.007183  781281 logs.go:284] No container was found matching "kube-proxy"
	I1006 15:02:15.007191  781281 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 15:02:15.007252  781281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 15:02:15.046441  781281 cri.go:89] found id: ""
	I1006 15:02:15.046474  781281 logs.go:282] 0 containers: []
	W1006 15:02:15.046481  781281 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 15:02:15.046489  781281 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 15:02:15.046553  781281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 15:02:15.086922  781281 cri.go:89] found id: ""
	I1006 15:02:15.086949  781281 logs.go:282] 0 containers: []
	W1006 15:02:15.086958  781281 logs.go:284] No container was found matching "kindnet"
	I1006 15:02:15.086964  781281 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 15:02:15.087030  781281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 15:02:15.126286  781281 cri.go:89] found id: ""
	I1006 15:02:15.126321  781281 logs.go:282] 0 containers: []
	W1006 15:02:15.126334  781281 logs.go:284] No container was found matching "storage-provisioner"
	I1006 15:02:15.126348  781281 logs.go:123] Gathering logs for kubelet ...
	I1006 15:02:15.126365  781281 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 15:02:15.247624  781281 logs.go:123] Gathering logs for dmesg ...
	I1006 15:02:15.247667  781281 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 15:02:15.263920  781281 logs.go:123] Gathering logs for describe nodes ...
	I1006 15:02:15.263955  781281 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 15:02:15.336637  781281 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 15:02:15.336680  781281 logs.go:123] Gathering logs for etcd [7d267996d44cd2b7da2cb930a76223e8fa651feae8248366eefcd22ecfa7099c] ...
	I1006 15:02:15.336703  781281 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d267996d44cd2b7da2cb930a76223e8fa651feae8248366eefcd22ecfa7099c"
	I1006 15:02:15.382303  781281 logs.go:123] Gathering logs for CRI-O ...
	I1006 15:02:15.382337  781281 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 15:02:15.567714  781281 logs.go:123] Gathering logs for container status ...
	I1006 15:02:15.567763  781281 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1006 15:02:15.612090  781281 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002750262s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.39.45:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.00032195s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000528905s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000204742s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.39.45:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 15:02:15.612221  781281 out.go:285] * 
	W1006 15:02:15.612322  781281 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002750262s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.39.45:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.00032195s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000528905s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000204742s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.39.45:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 15:02:15.612342  781281 out.go:285] * 
	W1006 15:02:15.614244  781281 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 15:02:15.617843  781281 out.go:203] 
	W1006 15:02:15.619160  781281 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002750262s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.39.45:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.00032195s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000528905s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000204742s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.39.45:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 15:02:15.619216  781281 out.go:285] * 
	I1006 15:02:15.620666  781281 out.go:203] 
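	The control-plane-check URLs in the kubeadm output above (kube-apiserver /livez on 192.168.39.45:8443, kube-controller-manager /healthz on 127.0.0.1:10257, kube-scheduler /livez on 127.0.0.1:10259) can be probed directly to narrow down which component died first. A one-shot sketch under two caveats: the loopback ports are only reachable from the VM itself, and certificate verification is skipped because the components serve self-signed certs:
	
		package main
	
		import (
			"crypto/tls"
			"fmt"
			"net/http"
			"time"
		)
	
		func main() {
			// Endpoints taken verbatim from the control-plane-check output above.
			endpoints := []string{
				"https://192.168.39.45:8443/livez", // kube-apiserver
				"https://127.0.0.1:10257/healthz",  // kube-controller-manager
				"https://127.0.0.1:10259/livez",    // kube-scheduler
			}
			client := &http.Client{
				Timeout:   3 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			for _, url := range endpoints {
				resp, err := client.Get(url)
				if err != nil {
					fmt.Printf("%-40s dial error: %v\n", url, err)
					continue
				}
				fmt.Printf("%-40s %s\n", url, resp.Status)
				resp.Body.Close()
			}
		}
	
	Given the "connection refused" errors in the log, all three probes would fail here, which is consistent with the container status below showing only etcd running.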
	
	
	==> CRI-O <==
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.296125901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759762936296055722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=425f35b8-54f9-47ed-b4a4-fa30fd0048d2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.296827600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd14094d-d0c9-4fa8-b147-50dfacdc4859 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.297161038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd14094d-d0c9-4fa8-b147-50dfacdc4859 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.297292285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d267996d44cd2b7da2cb930a76223e8fa651feae8248366eefcd22ecfa7099c,PodSandboxId:1f03b04acf12a27caabda0c47371cec98a87612ea3eed630cc9aaa80d5da2d13,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759762694989272360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-317912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1e01bc54677bcd5a38ecf2b90a162,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":23
81,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd14094d-d0c9-4fa8-b147-50dfacdc4859 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.333493574Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb2c44f7-3097-49c2-bdc8-eecf8561b9bc name=/runtime.v1.RuntimeService/Version
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.333612396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb2c44f7-3097-49c2-bdc8-eecf8561b9bc name=/runtime.v1.RuntimeService/Version
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.334603909Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0f9b8e3-7392-434c-9500-1efa07cff03c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.334970042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759762936334951304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0f9b8e3-7392-434c-9500-1efa07cff03c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.335602683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81d84d34-7d12-4082-9252-74353bd98040 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.335706291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81d84d34-7d12-4082-9252-74353bd98040 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.335784588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d267996d44cd2b7da2cb930a76223e8fa651feae8248366eefcd22ecfa7099c,PodSandboxId:1f03b04acf12a27caabda0c47371cec98a87612ea3eed630cc9aaa80d5da2d13,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759762694989272360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-317912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1e01bc54677bcd5a38ecf2b90a162,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":23
81,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81d84d34-7d12-4082-9252-74353bd98040 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.373365766Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c43043c-24e6-4aac-8d10-3bc4041ed231 name=/runtime.v1.RuntimeService/Version
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.373513579Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c43043c-24e6-4aac-8d10-3bc4041ed231 name=/runtime.v1.RuntimeService/Version
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.375231121Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18f6db24-f453-45e0-a6cd-12bd4e0b2c97 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.375850242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759762936375826057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18f6db24-f453-45e0-a6cd-12bd4e0b2c97 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.376751924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b361ccf1-ab31-4363-884c-8c93db028d57 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.376939805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b361ccf1-ab31-4363-884c-8c93db028d57 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.377033301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d267996d44cd2b7da2cb930a76223e8fa651feae8248366eefcd22ecfa7099c,PodSandboxId:1f03b04acf12a27caabda0c47371cec98a87612ea3eed630cc9aaa80d5da2d13,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759762694989272360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-317912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1e01bc54677bcd5a38ecf2b90a162,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":23
81,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b361ccf1-ab31-4363-884c-8c93db028d57 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.412934733Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da2edc27-d1cd-4b0e-a7a3-4016e1969864 name=/runtime.v1.RuntimeService/Version
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.413007408Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da2edc27-d1cd-4b0e-a7a3-4016e1969864 name=/runtime.v1.RuntimeService/Version
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.414238695Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd3f57fd-0e75-4cb1-8b67-4a82b1b0eedb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.414646890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759762936414627366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd3f57fd-0e75-4cb1-8b67-4a82b1b0eedb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.415220415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36341807-9f96-45c7-9035-6b4b955e7c7d name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.415448791Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36341807-9f96-45c7-9035-6b4b955e7c7d name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 15:02:16 kubernetes-upgrade-317912 crio[3116]: time="2025-10-06 15:02:16.415529485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d267996d44cd2b7da2cb930a76223e8fa651feae8248366eefcd22ecfa7099c,PodSandboxId:1f03b04acf12a27caabda0c47371cec98a87612ea3eed630cc9aaa80d5da2d13,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759762694989272360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-317912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b1e01bc54677bcd5a38ecf2b90a162,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":23
81,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36341807-9f96-45c7-9035-6b4b955e7c7d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	7d267996d44cd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   4 minutes ago       Running             etcd                4                   1f03b04acf12a       etcd-kubernetes-upgrade-317912
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 6 14:47] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000052] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002644] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Oct 6 14:48] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087595] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.113336] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.728606] kauditd_printk_skb: 199 callbacks suppressed
	[  +3.893878] kauditd_printk_skb: 464 callbacks suppressed
	[Oct 6 14:50] kauditd_printk_skb: 57 callbacks suppressed
	[Oct 6 14:51] kauditd_printk_skb: 68 callbacks suppressed
	[Oct 6 14:54] kauditd_printk_skb: 110 callbacks suppressed
	[Oct 6 14:58] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [7d267996d44cd2b7da2cb930a76223e8fa651feae8248366eefcd22ecfa7099c] <==
	{"level":"info","ts":"2025-10-06T14:58:16.030365Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"d386e7203fab19ce is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-06T14:58:16.030422Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"d386e7203fab19ce became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-06T14:58:16.030458Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"d386e7203fab19ce received MsgPreVoteResp from d386e7203fab19ce at term 1"}
	{"level":"info","ts":"2025-10-06T14:58:16.030468Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"d386e7203fab19ce has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-10-06T14:58:16.030482Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"d386e7203fab19ce became candidate at term 2"}
	{"level":"info","ts":"2025-10-06T14:58:16.032020Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"d386e7203fab19ce received MsgVoteResp from d386e7203fab19ce at term 2"}
	{"level":"info","ts":"2025-10-06T14:58:16.032056Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"d386e7203fab19ce has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-10-06T14:58:16.032141Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"d386e7203fab19ce became leader at term 2"}
	{"level":"info","ts":"2025-10-06T14:58:16.032156Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: d386e7203fab19ce elected leader d386e7203fab19ce at term 2"}
	{"level":"info","ts":"2025-10-06T14:58:16.033457Z","caller":"etcdserver/server.go:1804","msg":"published local member to cluster through raft","local-member-id":"d386e7203fab19ce","local-member-attributes":"{Name:kubernetes-upgrade-317912 ClientURLs:[https://192.168.39.45:2379]}","cluster-id":"34c61d36ecc5c83e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-06T14:58:16.033539Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-06T14:58:16.033873Z","caller":"etcdserver/server.go:2404","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-06T14:58:16.034266Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-06T14:58:16.034376Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-06T14:58:16.034408Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-06T14:58:16.037223Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-10-06T14:58:16.037353Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-06T14:58:16.037442Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-06T14:58:16.039493Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-06T14:58:16.041270Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.45:2379"}
	{"level":"info","ts":"2025-10-06T14:58:16.041384Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"34c61d36ecc5c83e","local-member-id":"d386e7203fab19ce","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-06T14:58:16.041479Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-06T14:58:16.041549Z","caller":"etcdserver/server.go:2424","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-06T14:58:16.041956Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-10-06T14:58:16.042199Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> kernel <==
	 15:02:16 up 14 min,  0 users,  load average: 0.00, 0.09, 0.14
	Linux kubernetes-upgrade-317912 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Oct 06 15:02:04 kubernetes-upgrade-317912 kubelet[10224]: I1006 15:02:04.131409   10224 kubelet_node_status.go:75] "Attempting to register node" node="kubernetes-upgrade-317912"
	Oct 06 15:02:04 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:04.132184   10224 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.39.45:8443/api/v1/nodes\": dial tcp 192.168.39.45:8443: connect: connection refused" node="kubernetes-upgrade-317912"
	Oct 06 15:02:04 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:04.378852   10224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759762924378355621  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 06 15:02:04 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:04.378873   10224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759762924378355621  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 06 15:02:08 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:08.174505   10224 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.45:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-317912&limit=500&resourceVersion=0\": dial tcp 192.168.39.45:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 06 15:02:08 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:08.292044   10224 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-317912\" not found" node="kubernetes-upgrade-317912"
	Oct 06 15:02:08 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:08.298956   10224 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-317912_kube-system_34d28518ebedca1a9adfe16a771a8be3_1\" is already in use by 2aabdfe2a414c3ad398c07cbf8e3e7932c08f0638fd6c60f69c84576929aa86f. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="f9132672811c2dd24375bfa0165098a2d41d6579c96632d5c10df5c7b31ccb3f"
	Oct 06 15:02:08 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:08.299180   10224 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-controller-manager start failed in pod kube-controller-manager-kubernetes-upgrade-317912_kube-system(34d28518ebedca1a9adfe16a771a8be3): CreateContainerError: the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-317912_kube-system_34d28518ebedca1a9adfe16a771a8be3_1\" is already in use by 2aabdfe2a414c3ad398c07cbf8e3e7932c08f0638fd6c60f69c84576929aa86f. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 06 15:02:08 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:08.299286   10224 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-317912_kube-system_34d28518ebedca1a9adfe16a771a8be3_1\\\" is already in use by 2aabdfe2a414c3ad398c07cbf8e3e7932c08f0638fd6c60f69c84576929aa86f. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-317912" podUID="34d28518ebedca1a9adfe16a771a8be3"
	Oct 06 15:02:09 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:09.291340   10224 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-317912\" not found" node="kubernetes-upgrade-317912"
	Oct 06 15:02:09 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:09.298117   10224 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-317912_kube-system_960166f3f8dc6d0876e9cc501a81fe4c_1\" is already in use by 054994408d3e12c70fda87a8db877828e3f7d71e941c3e9833c521ba8328c483. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="2254b5dc55174d90c89ff859c36bcadc1fb7de84dc4b1975c935750c11afb8ca"
	Oct 06 15:02:09 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:09.298203   10224 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-apiserver start failed in pod kube-apiserver-kubernetes-upgrade-317912_kube-system(960166f3f8dc6d0876e9cc501a81fe4c): CreateContainerError: the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-317912_kube-system_960166f3f8dc6d0876e9cc501a81fe4c_1\" is already in use by 054994408d3e12c70fda87a8db877828e3f7d71e941c3e9833c521ba8328c483. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 06 15:02:09 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:09.298236   10224 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"the container name \\\"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-317912_kube-system_960166f3f8dc6d0876e9cc501a81fe4c_1\\\" is already in use by 054994408d3e12c70fda87a8db877828e3f7d71e941c3e9833c521ba8328c483. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-317912" podUID="960166f3f8dc6d0876e9cc501a81fe4c"
	Oct 06 15:02:09 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:09.926905   10224 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.45:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.39.45:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 06 15:02:10 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:10.922540   10224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.45:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-317912?timeout=10s\": dial tcp 192.168.39.45:8443: connect: connection refused" interval="7s"
	Oct 06 15:02:11 kubernetes-upgrade-317912 kubelet[10224]: I1006 15:02:11.134937   10224 kubelet_node_status.go:75] "Attempting to register node" node="kubernetes-upgrade-317912"
	Oct 06 15:02:11 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:11.135449   10224 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.39.45:8443/api/v1/nodes\": dial tcp 192.168.39.45:8443: connect: connection refused" node="kubernetes-upgrade-317912"
	Oct 06 15:02:11 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:11.278907   10224 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.39.45:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.45:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-317912.186beed8cb890fba  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-317912,UID:kubernetes-upgrade-317912,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node kubernetes-upgrade-317912 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-317912,},FirstTimestamp:2025-10-06 14:58:14.313414586 +0000 UTC m=+0.558628388,LastTimestamp:2025-10-06 14:58:14.313414586 +0000 UTC m=+0.558628388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-317912,}"
	Oct 06 15:02:12 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:12.292044   10224 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-317912\" not found" node="kubernetes-upgrade-317912"
	Oct 06 15:02:13 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:13.291818   10224 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-317912\" not found" node="kubernetes-upgrade-317912"
	Oct 06 15:02:13 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:13.301339   10224 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-317912_kube-system_c2425f28cdaae2dead70b477306e2dda_1\" is already in use by 49e81b28fb670ccb22f1fce223d81d2621a2800f19f90824d848f4e4a62ba572. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="780c2b64fd6aadefeacace812a166abf837495cc02f2b0361dfa58bf8bd497f7"
	Oct 06 15:02:13 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:13.301407   10224 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-scheduler start failed in pod kube-scheduler-kubernetes-upgrade-317912_kube-system(c2425f28cdaae2dead70b477306e2dda): CreateContainerError: the container name \"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-317912_kube-system_c2425f28cdaae2dead70b477306e2dda_1\" is already in use by 49e81b28fb670ccb22f1fce223d81d2621a2800f19f90824d848f4e4a62ba572. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 06 15:02:13 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:13.301436   10224 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"the container name \\\"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-317912_kube-system_c2425f28cdaae2dead70b477306e2dda_1\\\" is already in use by 49e81b28fb670ccb22f1fce223d81d2621a2800f19f90824d848f4e4a62ba572. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-317912" podUID="c2425f28cdaae2dead70b477306e2dda"
	Oct 06 15:02:14 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:14.381445   10224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759762934380783439  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 06 15:02:14 kubernetes-upgrade-317912 kubelet[10224]: E1006 15:02:14.381471   10224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759762934380783439  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-317912 -n kubernetes-upgrade-317912
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-317912 -n kubernetes-upgrade-317912: exit status 2 (243.817457ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-317912" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-317912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-317912
--- FAIL: TestKubernetesUpgrade (931.33s)
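The repeating CreateContainerError entries in the kubelet log above ("the container name ... is already in use") point to stale CRI-O containers left behind by an earlier start attempt. A minimal manual cleanup sketch, assuming SSH access to the node and that crictl is available there; the container name is taken from the log, the IDs are placeholders:

	$ sudo crictl ps -a --name kube-apiserver   # list every attempt, including exited ones
	$ sudo crictl rm <stale-container-id>       # free the name held by the exited container
	$ sudo crictl rmp -f <pod-sandbox-id>       # optionally drop the whole sandbox if it is wedged

Once the stale container is gone, kubelet should be able to recreate kube-apiserver on its next sync loop.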

                                                
                                    
TestNoKubernetes/serial/ProfileList (124.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 profile list --output=json: signal: killed (2m1.742316349s)

                                                
                                                
** stderr ** 
	E1006 14:46:45.466945  780022 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-419392" does not appear in /home/jenkins/minikube-integration/21701-739942/kubeconfig

                                                
                                                
** /stderr **
no_kubernetes_test.go:183: Profile list --output=json failed : "out/minikube-linux-amd64 profile list --output=json" : signal: killed
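The profile list command above did not fail on its own; it was still running when the harness killed it after roughly two minutes. A small reproduction sketch with a bounded runtime, assuming coreutils timeout and the binary path used in this run:

	$ timeout 120s out/minikube-linux-amd64 profile list --output=json; echo "exit=$?"
	# exit=124 means the command was still hung when the timeout fired

The stderr above hints at a plausible trigger: the NoKubernetes-419392 profile has no endpoint entry in the kubeconfig, so status collection for it may block rather than error out.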
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestNoKubernetes/serial/ProfileList]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-419392 -n NoKubernetes-419392
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-419392 -n NoKubernetes-419392: exit status 6 (313.56784ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 14:48:47.210784  781694 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-419392" does not appear in /home/jenkins/minikube-integration/21701-739942/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
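The stdout above already names the remedy for the stale kubectl context: minikube's update-context subcommand. A short sketch of that fix, assuming the profile name and binary path from this run:

	$ out/minikube-linux-amd64 -p NoKubernetes-419392 update-context
	$ kubectl config current-context   # should now resolve to the refreshed entry

For a --no-kubernetes profile there may be no apiserver endpoint to write back, in which case the warning is cosmetic.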
helpers_test.go:252: <<< TestNoKubernetes/serial/ProfileList FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestNoKubernetes/serial/ProfileList]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-419392 logs -n 25
helpers_test.go:260: TestNoKubernetes/serial/ProfileList logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-702246 sudo crio config                                                                                                                                                                                                                   │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │                     │
	│ delete  │ -p cilium-702246                                                                                                                                                                                                                                    │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:45 UTC │
	│ start   │ -p cert-expiration-435206 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                    │ cert-expiration-435206    │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p NoKubernetes-419392 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                     │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p running-upgrade-455354 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                  │ running-upgrade-455354    │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ ssh     │ force-systemd-flag-640885 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                                │ force-systemd-flag-640885 │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:45 UTC │
	│ delete  │ -p force-systemd-flag-640885                                                                                                                                                                                                                        │ force-systemd-flag-640885 │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:45 UTC │
	│ start   │ -p cert-options-809645 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ delete  │ -p NoKubernetes-419392                                                                                                                                                                                                                              │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p NoKubernetes-419392 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                     │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-455354 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                         │ running-upgrade-455354    │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │                     │
	│ delete  │ -p running-upgrade-455354                                                                                                                                                                                                                           │ running-upgrade-455354    │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p pause-670840 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                 │ pause-670840              │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:47 UTC │
	│ ssh     │ -p NoKubernetes-419392 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │                     │
	│ ssh     │ cert-options-809645 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ ssh     │ -p cert-options-809645 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ delete  │ -p cert-options-809645                                                                                                                                                                                                                              │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kubernetes-upgrade-317912 │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:47 UTC │
	│ start   │ -p pause-670840 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                          │ pause-670840              │ jenkins │ v1.37.0 │ 06 Oct 25 14:47 UTC │ 06 Oct 25 14:48 UTC │
	│ stop    │ -p kubernetes-upgrade-317912                                                                                                                                                                                                                        │ kubernetes-upgrade-317912 │ jenkins │ v1.37.0 │ 06 Oct 25 14:47 UTC │ 06 Oct 25 14:47 UTC │
	│ start   │ -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kubernetes-upgrade-317912 │ jenkins │ v1.37.0 │ 06 Oct 25 14:47 UTC │ 06 Oct 25 14:48 UTC │
	│ start   │ -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                         │ kubernetes-upgrade-317912 │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │                     │
	│ start   │ -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kubernetes-upgrade-317912 │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │                     │
	│ delete  │ -p pause-670840                                                                                                                                                                                                                                     │ pause-670840              │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │ 06 Oct 25 14:48 UTC │
	│ start   │ -p stopped-upgrade-216364 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                      │ stopped-upgrade-216364    │ jenkins │ v1.32.0 │ 06 Oct 25 14:48 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:48:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:48:23.557198  781435 out.go:296] Setting OutFile to fd 1 ...
	I1006 14:48:23.557466  781435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 14:48:23.557470  781435 out.go:309] Setting ErrFile to fd 2...
	I1006 14:48:23.557473  781435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 14:48:23.557691  781435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:48:23.558187  781435 out.go:303] Setting JSON to false
	I1006 14:48:23.559167  781435 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16255,"bootTime":1759745849,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:48:23.559218  781435 start.go:138] virtualization: kvm guest
	I1006 14:48:23.561665  781435 out.go:177] * [stopped-upgrade-216364] minikube v1.32.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:48:23.563828  781435 out.go:177]   - MINIKUBE_LOCATION=21701
	I1006 14:48:23.565330  781435 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:48:23.563843  781435 notify.go:220] Checking for updates...
	I1006 14:48:23.566532  781435 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 14:48:23.567647  781435 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:48:23.568923  781435 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:48:23.570265  781435 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig770894032
	I1006 14:48:23.572314  781435 config.go:182] Loaded profile config "NoKubernetes-419392": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1006 14:48:23.572461  781435 config.go:182] Loaded profile config "cert-expiration-435206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:48:23.572598  781435 config.go:182] Loaded profile config "kubernetes-upgrade-317912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:48:23.572729  781435 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 14:48:23.613373  781435 out.go:177] * Using the kvm2 driver based on user configuration
	I1006 14:48:23.614400  781435 start.go:298] selected driver: kvm2
	I1006 14:48:23.614410  781435 start.go:902] validating driver "kvm2" against <nil>
	I1006 14:48:23.614423  781435 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:48:23.615715  781435 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:48:23.615821  781435 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 14:48:23.630822  781435 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 14:48:23.630862  781435 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1006 14:48:23.631143  781435 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 14:48:23.631174  781435 cni.go:84] Creating CNI manager for ""
	I1006 14:48:23.631189  781435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:48:23.631198  781435 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1006 14:48:23.631204  781435 start_flags.go:323] config:
	{Name:stopped-upgrade-216364 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-216364 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 14:48:23.631366  781435 iso.go:125] acquiring lock: {Name:mk95d3590e93d6e8355b01ba8879f9f51b8be64b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:48:23.633231  781435 out.go:177] * Starting control plane node stopped-upgrade-216364 in cluster stopped-upgrade-216364
	I1006 14:48:23.634744  781435 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1006 14:48:23.634778  781435 preload.go:148] Found local preload: /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1006 14:48:23.634785  781435 cache.go:56] Caching tarball of preloaded images
	I1006 14:48:23.634876  781435 preload.go:174] Found /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:48:23.634883  781435 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1006 14:48:23.634961  781435 profile.go:148] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/stopped-upgrade-216364/config.json ...
	I1006 14:48:23.634973  781435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/stopped-upgrade-216364/config.json: {Name:mk4f94f61cc6752cbb8bee236f515ff0c9ed6030 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:48:23.635135  781435 start.go:365] acquiring machines lock for stopped-upgrade-216364: {Name:mkc5be1cfc8fcefa1839aef4c67a376cc5095e30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1006 14:48:24.443802  781435 start.go:369] acquired machines lock for "stopped-upgrade-216364" in 808.628946ms
	I1006 14:48:24.443896  781435 start.go:93] Provisioning new machine with config: &{Name:stopped-upgrade-216364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-216364 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:48:24.444051  781435 start.go:125] createHost starting for "" (driver="kvm2")
	I1006 14:48:21.030857  781281 out.go:252] * Updating the running kvm2 "kubernetes-upgrade-317912" VM ...
	I1006 14:48:21.030892  781281 machine.go:93] provisionDockerMachine start ...
	I1006 14:48:21.030911  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:21.031144  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:21.034195  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.034829  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:21.034858  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.035116  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:21.035295  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.035453  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.035650  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:21.035857  781281 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:21.036194  781281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1006 14:48:21.036210  781281 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:48:21.171949  781281 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-317912
	
	I1006 14:48:21.171977  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetMachineName
	I1006 14:48:21.172256  781281 buildroot.go:166] provisioning hostname "kubernetes-upgrade-317912"
	I1006 14:48:21.172296  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetMachineName
	I1006 14:48:21.172472  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:21.176408  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.176940  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:21.176982  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.177412  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:21.177684  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.177896  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.178076  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:21.178257  781281 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:21.178545  781281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1006 14:48:21.178564  781281 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-317912 && echo "kubernetes-upgrade-317912" | sudo tee /etc/hostname
	I1006 14:48:21.369751  781281 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-317912
	
	I1006 14:48:21.369795  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:21.373455  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.374139  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:21.374181  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.374689  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:21.375002  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.375223  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:21.375407  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:21.375625  781281 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:21.375896  781281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1006 14:48:21.375914  781281 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-317912' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-317912/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-317912' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:48:21.504875  781281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:48:21.504921  781281 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21701-739942/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-739942/.minikube}
	I1006 14:48:21.505001  781281 buildroot.go:174] setting up certificates
	I1006 14:48:21.505018  781281 provision.go:84] configureAuth start
	I1006 14:48:21.505037  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetMachineName
	I1006 14:48:21.505368  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetIP
	I1006 14:48:21.509414  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.509947  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:21.510019  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.510258  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:21.513658  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.514137  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:21.514184  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:21.514298  781281 provision.go:143] copyHostCerts
	I1006 14:48:21.514363  781281 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem, removing ...
	I1006 14:48:21.514392  781281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem
	I1006 14:48:21.514485  781281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem (1078 bytes)
	I1006 14:48:21.514701  781281 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem, removing ...
	I1006 14:48:21.514715  781281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem
	I1006 14:48:21.514750  781281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem (1123 bytes)
	I1006 14:48:21.514821  781281 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem, removing ...
	I1006 14:48:21.514829  781281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem
	I1006 14:48:21.514853  781281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem (1679 bytes)
	I1006 14:48:21.514913  781281 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-317912 san=[127.0.0.1 192.168.39.45 kubernetes-upgrade-317912 localhost minikube]
	I1006 14:48:22.246649  781281 provision.go:177] copyRemoteCerts
	I1006 14:48:22.246719  781281 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:48:22.246753  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:22.250988  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:22.471758  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:22.471794  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:22.472272  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:22.472621  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:22.472857  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:22.473093  781281 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa Username:docker}
	I1006 14:48:22.599734  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1006 14:48:22.672701  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1006 14:48:22.756733  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:48:22.831811  781281 provision.go:87] duration metric: took 1.326772559s to configureAuth
	I1006 14:48:22.831859  781281 buildroot.go:189] setting minikube options for container-runtime
	I1006 14:48:22.832114  781281 config.go:182] Loaded profile config "kubernetes-upgrade-317912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:48:22.832256  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:22.836152  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:22.836701  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:22.836750  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:22.837055  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:22.837297  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:22.837492  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:22.837665  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:22.837902  781281 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:22.838120  781281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1006 14:48:22.838136  781281 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:48:23.988855  781281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:48:23.988892  781281 machine.go:96] duration metric: took 2.957989013s to provisionDockerMachine
	I1006 14:48:23.988910  781281 start.go:293] postStartSetup for "kubernetes-upgrade-317912" (driver="kvm2")
	I1006 14:48:23.988954  781281 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:48:23.989004  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:23.989410  781281 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:48:23.989467  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:23.993148  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:23.993758  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:23.993793  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:23.993988  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:23.994216  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:23.994393  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:23.994649  781281 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa Username:docker}
	I1006 14:48:24.129904  781281 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:48:24.139427  781281 info.go:137] Remote host: Buildroot 2025.02
	I1006 14:48:24.139539  781281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/addons for local assets ...
	I1006 14:48:24.139639  781281 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/files for local assets ...
	I1006 14:48:24.139780  781281 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem -> 7438512.pem in /etc/ssl/certs
	I1006 14:48:24.139924  781281 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:48:24.177944  781281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:48:24.249784  781281 start.go:296] duration metric: took 260.85658ms for postStartSetup
	I1006 14:48:24.249867  781281 fix.go:56] duration metric: took 3.240057002s for fixHost
	I1006 14:48:24.249896  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:24.253541  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.254015  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:24.254059  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.254287  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:24.254562  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:24.254792  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:24.254972  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:24.255240  781281 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:24.255541  781281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1006 14:48:24.255562  781281 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1006 14:48:24.443570  781281 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759762104.437038277
	
	I1006 14:48:24.443626  781281 fix.go:216] guest clock: 1759762104.437038277
	I1006 14:48:24.443637  781281 fix.go:229] Guest: 2025-10-06 14:48:24.437038277 +0000 UTC Remote: 2025-10-06 14:48:24.249873501 +0000 UTC m=+3.426269664 (delta=187.164776ms)
	I1006 14:48:24.443666  781281 fix.go:200] guest clock delta is within tolerance: 187.164776ms
	I1006 14:48:24.443672  781281 start.go:83] releasing machines lock for "kubernetes-upgrade-317912", held for 3.433910698s
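The fixHost step above runs `date +%s.%N` on the guest and compares the result against the host clock; this run passes because the ~187ms delta is inside tolerance. A small sketch of that comparison, assuming a one-second tolerance (minikube's actual threshold may differ), with the values taken from the log:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // guestDelta parses the guest's `date +%s.%N` output and returns the
    // signed offset from the given host reference time. Parsing through
    // float64 loses sub-microsecond precision, which is fine for a sketch.
    func guestDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        // Host "Remote" timestamp and guest clock string from the log above.
        host := time.Date(2025, 10, 6, 14, 48, 24, 249873501, time.UTC)
        d, err := guestDelta("1759762104.437038277", host)
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second // assumed threshold
        fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() <= tolerance)
    }
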
	I1006 14:48:24.443703  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:24.444039  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetIP
	I1006 14:48:24.447844  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.448323  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:24.448376  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.448681  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:24.449472  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:24.449729  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:48:24.449855  781281 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:48:24.449901  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:24.450020  781281 ssh_runner.go:195] Run: cat /version.json
	I1006 14:48:24.450052  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHHostname
	I1006 14:48:24.453501  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.453598  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.454061  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:24.454098  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:59 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:48:24.454138  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.454155  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:48:24.454451  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:24.454664  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHPort
	I1006 14:48:24.454673  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:24.454933  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:24.454952  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHKeyPath
	I1006 14:48:24.455163  781281 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa Username:docker}
	I1006 14:48:24.455223  781281 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetSSHUsername
	I1006 14:48:24.455433  781281 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa Username:docker}
	I1006 14:48:24.644428  781281 ssh_runner.go:195] Run: systemctl --version
	I1006 14:48:24.661947  781281 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:48:24.911132  781281 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:48:24.935666  781281 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:48:24.935746  781281 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:48:24.965417  781281 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
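Before settling on a CNI, the runner disables any pre-existing bridge/podman configs by renaming them with a .mk_disabled suffix, as the find/-exec mv command above does. A rough Go equivalent, assuming the same /etc/cni/net.d layout:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames any bridge/podman CNI configs so CRI-O's own
    // CNI config wins, mirroring the logged find ... -exec mv step.
    func disableBridgeCNI(dir string) ([]string, error) {
        var moved []string
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return moved, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous pass
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return moved, err
                }
                moved = append(moved, m)
            }
        }
        return moved, nil
    }

    func main() {
        moved, err := disableBridgeCNI("/etc/cni/net.d")
        if err != nil {
            fmt.Println("error:", err)
        }
        if len(moved) == 0 {
            fmt.Println("no active bridge cni configs found - nothing to disable")
        }
    }
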
	I1006 14:48:24.965448  781281 start.go:495] detecting cgroup driver to use...
	I1006 14:48:24.965538  781281 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:48:25.009626  781281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:48:25.047603  781281 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:48:25.047785  781281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:48:25.142428  781281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:48:25.187799  781281 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:48:25.611787  781281 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:48:24.446168  781435 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1006 14:48:24.446422  781435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:48:24.446486  781435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:48:24.463603  781435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I1006 14:48:24.464197  781435 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:48:24.464988  781435 main.go:141] libmachine: Using API Version  1
	I1006 14:48:24.465008  781435 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:48:24.465413  781435 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:48:24.465671  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetMachineName
	I1006 14:48:24.465887  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .DriverName
	I1006 14:48:24.466093  781435 start.go:159] libmachine.API.Create for "stopped-upgrade-216364" (driver="kvm2")
	I1006 14:48:24.466144  781435 client.go:168] LocalClient.Create starting
	I1006 14:48:24.466188  781435 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem
	I1006 14:48:24.466245  781435 main.go:141] libmachine: Decoding PEM data...
	I1006 14:48:24.466265  781435 main.go:141] libmachine: Parsing certificate...
	I1006 14:48:24.466355  781435 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem
	I1006 14:48:24.466385  781435 main.go:141] libmachine: Decoding PEM data...
	I1006 14:48:24.466401  781435 main.go:141] libmachine: Parsing certificate...
	I1006 14:48:24.466426  781435 main.go:141] libmachine: Running pre-create checks...
	I1006 14:48:24.466437  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .PreCreateCheck
	I1006 14:48:24.466866  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetConfigRaw
	I1006 14:48:24.467423  781435 main.go:141] libmachine: Creating machine...
	I1006 14:48:24.467435  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .Create
	I1006 14:48:24.467632  781435 main.go:141] libmachine: (stopped-upgrade-216364) creating domain...
	I1006 14:48:24.467648  781435 main.go:141] libmachine: (stopped-upgrade-216364) creating network...
	I1006 14:48:24.469149  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found existing default network
	I1006 14:48:24.469380  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | <network connections='3'>
	I1006 14:48:24.469397  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <name>default</name>
	I1006 14:48:24.469416  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1006 14:48:24.469430  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <forward mode='nat'>
	I1006 14:48:24.469439  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <nat>
	I1006 14:48:24.469449  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <port start='1024' end='65535'/>
	I1006 14:48:24.469458  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </nat>
	I1006 14:48:24.469464  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   </forward>
	I1006 14:48:24.469474  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1006 14:48:24.469483  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1006 14:48:24.469500  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1006 14:48:24.469515  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <dhcp>
	I1006 14:48:24.469526  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1006 14:48:24.469534  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </dhcp>
	I1006 14:48:24.469543  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   </ip>
	I1006 14:48:24.469550  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | </network>
	I1006 14:48:24.469561  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | 
	I1006 14:48:24.470691  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:24.470502  781474 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ab:1c:33} reservation:<nil>}
	I1006 14:48:24.471373  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:24.471246  781474 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:db:6e:b4} reservation:<nil>}
	I1006 14:48:24.472086  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:24.471946  781474 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ac:e9:33} reservation:<nil>}
	I1006 14:48:24.472932  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:24.472855  781474 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000275730}
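The scan above walks candidate 192.168.x.0/24 subnets in order (39, 50, and 61 are taken by running clusters) and takes the first one no host interface already occupies. A simplified sketch of that selection; the candidate list and overlap test are illustrative, not the driver's exact logic:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet returns the first candidate /24 that does not
    // overlap any subnet already present on a host interface.
    func firstFreeSubnet(taken []*net.IPNet) string {
        for _, third := range []int{39, 50, 61, 72, 83, 94} {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            _, subnet, _ := net.ParseCIDR(cidr)
            free := true
            for _, t := range taken {
                if subnet.Contains(t.IP) || t.Contains(subnet.IP) {
                    free = false
                    break
                }
            }
            if free {
                return cidr
            }
        }
        return ""
    }

    func main() {
        var taken []*net.IPNet
        addrs, _ := net.InterfaceAddrs() // e.g. the virbr1..virbr3 gateways in the log
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                taken = append(taken, ipnet)
            }
        }
        fmt.Println("using free private subnet:", firstFreeSubnet(taken))
    }
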
	I1006 14:48:24.473019  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | defining private network:
	I1006 14:48:24.473040  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | 
	I1006 14:48:24.473047  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | <network>
	I1006 14:48:24.473056  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <name>mk-stopped-upgrade-216364</name>
	I1006 14:48:24.473063  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <dns enable='no'/>
	I1006 14:48:24.473073  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1006 14:48:24.473099  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <dhcp>
	I1006 14:48:24.473107  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1006 14:48:24.473114  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </dhcp>
	I1006 14:48:24.473120  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   </ip>
	I1006 14:48:24.473149  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | </network>
	I1006 14:48:24.473166  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | 
	I1006 14:48:24.479640  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | creating private network mk-stopped-upgrade-216364 192.168.72.0/24...
	I1006 14:48:24.568632  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | private network mk-stopped-upgrade-216364 192.168.72.0/24 created
	I1006 14:48:24.568912  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | <network>
	I1006 14:48:24.568928  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <name>mk-stopped-upgrade-216364</name>
	I1006 14:48:24.568948  781435 main.go:141] libmachine: (stopped-upgrade-216364) setting up store path in /home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364 ...
	I1006 14:48:24.568969  781435 main.go:141] libmachine: (stopped-upgrade-216364) building disk image from file:///home/jenkins/minikube-integration/21701-739942/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1006 14:48:24.568980  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <uuid>d0d95f68-e970-4ef5-a58b-1978a69ed6fc</uuid>
	I1006 14:48:24.568989  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <bridge name='virbr4' stp='on' delay='0'/>
	I1006 14:48:24.569041  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <mac address='52:54:00:11:37:97'/>
	I1006 14:48:24.569066  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <dns enable='no'/>
	I1006 14:48:24.569098  781435 main.go:141] libmachine: (stopped-upgrade-216364) Downloading /home/jenkins/minikube-integration/21701-739942/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21701-739942/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso...
	I1006 14:48:24.569120  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1006 14:48:24.569134  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <dhcp>
	I1006 14:48:24.569143  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1006 14:48:24.569153  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </dhcp>
	I1006 14:48:24.569158  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   </ip>
	I1006 14:48:24.569163  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | </network>
	I1006 14:48:24.569170  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | 
	I1006 14:48:24.569191  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:24.568887  781474 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 14:48:24.791450  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:24.791318  781474 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/id_rsa...
	I1006 14:48:24.882940  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:24.882779  781474 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/stopped-upgrade-216364.rawdisk...
	I1006 14:48:24.882961  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | Writing magic tar header
	I1006 14:48:24.882974  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | Writing SSH key tar header
	I1006 14:48:24.883000  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:24.882902  781474 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364 ...
	I1006 14:48:24.883011  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364
	I1006 14:48:24.883125  781435 main.go:141] libmachine: (stopped-upgrade-216364) setting executable bit set on /home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364 (perms=drwx------)
	I1006 14:48:24.883150  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21701-739942/.minikube/machines
	I1006 14:48:24.883160  781435 main.go:141] libmachine: (stopped-upgrade-216364) setting executable bit set on /home/jenkins/minikube-integration/21701-739942/.minikube/machines (perms=drwxr-xr-x)
	I1006 14:48:24.883174  781435 main.go:141] libmachine: (stopped-upgrade-216364) setting executable bit set on /home/jenkins/minikube-integration/21701-739942/.minikube (perms=drwxr-xr-x)
	I1006 14:48:24.883184  781435 main.go:141] libmachine: (stopped-upgrade-216364) setting executable bit set on /home/jenkins/minikube-integration/21701-739942 (perms=drwxrwxr-x)
	I1006 14:48:24.883196  781435 main.go:141] libmachine: (stopped-upgrade-216364) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1006 14:48:24.883205  781435 main.go:141] libmachine: (stopped-upgrade-216364) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1006 14:48:24.883216  781435 main.go:141] libmachine: (stopped-upgrade-216364) defining domain...
	I1006 14:48:24.883228  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 14:48:24.883284  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21701-739942
	I1006 14:48:24.883312  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1006 14:48:24.883329  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | checking permissions on dir: /home/jenkins
	I1006 14:48:24.883344  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | checking permissions on dir: /home
	I1006 14:48:24.883363  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | skipping /home - not owner
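The permission pass above climbs from the machine directory toward /, adding the executable bit on each directory the current user owns and skipping the rest ("skipping /home - not owner"). A Unix-only sketch of the same walk:

    package main

    import (
        "fmt"
        "os"
        "os/user"
        "path/filepath"
        "strconv"
        "syscall"
    )

    // fixPermsUpward adds u+x on each parent of path that the current user
    // owns, echoing the "checking permissions on dir" lines in the log.
    func fixPermsUpward(path string) error {
        me, err := user.Current()
        if err != nil {
            return err
        }
        uid, _ := strconv.Atoi(me.Uid)
        for dir := path; dir != "/"; dir = filepath.Dir(dir) {
            info, err := os.Stat(dir)
            if err != nil {
                return err
            }
            st, ok := info.Sys().(*syscall.Stat_t) // Unix-only
            if !ok || int(st.Uid) != uid {
                fmt.Println("skipping", dir, "- not owner")
                continue
            }
            perms := info.Mode().Perm() | 0o100
            if err := os.Chmod(dir, perms); err != nil {
                return err
            }
            fmt.Printf("set executable bit on %s (perms=%v)\n", dir, perms)
        }
        return nil
    }

    func main() {
        home, _ := os.UserHomeDir()
        _ = fixPermsUpward(filepath.Join(home, ".minikube", "machines", "stopped-upgrade-216364"))
    }
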
	I1006 14:48:24.884664  781435 main.go:141] libmachine: (stopped-upgrade-216364) defining domain using XML: 
	I1006 14:48:24.884674  781435 main.go:141] libmachine: (stopped-upgrade-216364) <domain type='kvm'>
	I1006 14:48:24.884680  781435 main.go:141] libmachine: (stopped-upgrade-216364)   <name>stopped-upgrade-216364</name>
	I1006 14:48:24.884685  781435 main.go:141] libmachine: (stopped-upgrade-216364)   <memory unit='MiB'>3072</memory>
	I1006 14:48:24.884690  781435 main.go:141] libmachine: (stopped-upgrade-216364)   <vcpu>2</vcpu>
	I1006 14:48:24.884694  781435 main.go:141] libmachine: (stopped-upgrade-216364)   <features>
	I1006 14:48:24.884699  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <acpi/>
	I1006 14:48:24.884703  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <apic/>
	I1006 14:48:24.884709  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <pae/>
	I1006 14:48:24.884722  781435 main.go:141] libmachine: (stopped-upgrade-216364)   </features>
	I1006 14:48:24.884728  781435 main.go:141] libmachine: (stopped-upgrade-216364)   <cpu mode='host-passthrough'>
	I1006 14:48:24.884732  781435 main.go:141] libmachine: (stopped-upgrade-216364)   </cpu>
	I1006 14:48:24.884737  781435 main.go:141] libmachine: (stopped-upgrade-216364)   <os>
	I1006 14:48:24.884741  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <type>hvm</type>
	I1006 14:48:24.884774  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <boot dev='cdrom'/>
	I1006 14:48:24.884792  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <boot dev='hd'/>
	I1006 14:48:24.884803  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <bootmenu enable='no'/>
	I1006 14:48:24.884821  781435 main.go:141] libmachine: (stopped-upgrade-216364)   </os>
	I1006 14:48:24.884829  781435 main.go:141] libmachine: (stopped-upgrade-216364)   <devices>
	I1006 14:48:24.884843  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <disk type='file' device='cdrom'>
	I1006 14:48:24.884857  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/boot2docker.iso'/>
	I1006 14:48:24.884866  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <target dev='hdc' bus='scsi'/>
	I1006 14:48:24.884874  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <readonly/>
	I1006 14:48:24.884881  781435 main.go:141] libmachine: (stopped-upgrade-216364)     </disk>
	I1006 14:48:24.884890  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <disk type='file' device='disk'>
	I1006 14:48:24.884905  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1006 14:48:24.884920  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/stopped-upgrade-216364.rawdisk'/>
	I1006 14:48:24.884929  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <target dev='hda' bus='virtio'/>
	I1006 14:48:24.884938  781435 main.go:141] libmachine: (stopped-upgrade-216364)     </disk>
	I1006 14:48:24.884946  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <interface type='network'>
	I1006 14:48:24.884957  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <source network='mk-stopped-upgrade-216364'/>
	I1006 14:48:24.884965  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <model type='virtio'/>
	I1006 14:48:24.884974  781435 main.go:141] libmachine: (stopped-upgrade-216364)     </interface>
	I1006 14:48:24.884992  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <interface type='network'>
	I1006 14:48:24.885003  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <source network='default'/>
	I1006 14:48:24.885010  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <model type='virtio'/>
	I1006 14:48:24.885017  781435 main.go:141] libmachine: (stopped-upgrade-216364)     </interface>
	I1006 14:48:24.885024  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <serial type='pty'>
	I1006 14:48:24.885057  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <target port='0'/>
	I1006 14:48:24.885074  781435 main.go:141] libmachine: (stopped-upgrade-216364)     </serial>
	I1006 14:48:24.885084  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <console type='pty'>
	I1006 14:48:24.885093  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <target type='serial' port='0'/>
	I1006 14:48:24.885101  781435 main.go:141] libmachine: (stopped-upgrade-216364)     </console>
	I1006 14:48:24.885108  781435 main.go:141] libmachine: (stopped-upgrade-216364)     <rng model='virtio'>
	I1006 14:48:24.885120  781435 main.go:141] libmachine: (stopped-upgrade-216364)       <backend model='random'>/dev/random</backend>
	I1006 14:48:24.885127  781435 main.go:141] libmachine: (stopped-upgrade-216364)     </rng>
	I1006 14:48:24.885136  781435 main.go:141] libmachine: (stopped-upgrade-216364)   </devices>
	I1006 14:48:24.885149  781435 main.go:141] libmachine: (stopped-upgrade-216364) </domain>
	I1006 14:48:24.885160  781435 main.go:141] libmachine: (stopped-upgrade-216364) 
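Defining the domain from the XML above and then "starting domain..." corresponds to a define-then-create sequence against libvirt. A sketch using the libvirt.org/go/libvirt binding, which is an assumption here (the kvm2 driver talks to libvirt through its own plugin); a deliberately minimal domain XML stands in for the full document printed above:

    package main

    import (
        "fmt"
        "log"

        libvirt "libvirt.org/go/libvirt" // assumed binding
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Minimal stand-in for the logged <domain type='kvm'> document.
        domainXML := `<domain type='kvm'>
      <name>sketch-example</name>
      <memory unit='MiB'>512</memory>
      <vcpu>1</vcpu>
      <os><type>hvm</type></os>
    </domain>`

        dom, err := conn.DomainDefineXML(domainXML) // persistent definition
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        // "starting domain..." maps to creating (booting) the defined domain.
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("domain is now running")
    }
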
	I1006 14:48:24.890366  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:e0:b1:88 in network default
	I1006 14:48:24.891201  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:24.891212  781435 main.go:141] libmachine: (stopped-upgrade-216364) starting domain...
	I1006 14:48:24.891224  781435 main.go:141] libmachine: (stopped-upgrade-216364) ensuring networks are active...
	I1006 14:48:24.892132  781435 main.go:141] libmachine: (stopped-upgrade-216364) Ensuring network default is active
	I1006 14:48:24.892621  781435 main.go:141] libmachine: (stopped-upgrade-216364) Ensuring network mk-stopped-upgrade-216364 is active
	I1006 14:48:24.893443  781435 main.go:141] libmachine: (stopped-upgrade-216364) getting domain XML...
	I1006 14:48:24.894818  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | starting domain XML:
	I1006 14:48:24.894850  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | <domain type='kvm'>
	I1006 14:48:24.894861  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <name>stopped-upgrade-216364</name>
	I1006 14:48:24.894871  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <uuid>7c341374-52dc-4376-ad02-6b08cceac8a7</uuid>
	I1006 14:48:24.894879  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <memory unit='KiB'>3145728</memory>
	I1006 14:48:24.894886  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1006 14:48:24.894895  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <vcpu placement='static'>2</vcpu>
	I1006 14:48:24.894902  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <os>
	I1006 14:48:24.894911  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1006 14:48:24.894918  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <boot dev='cdrom'/>
	I1006 14:48:24.894950  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <boot dev='hd'/>
	I1006 14:48:24.894968  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <bootmenu enable='no'/>
	I1006 14:48:24.894979  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   </os>
	I1006 14:48:24.894987  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <features>
	I1006 14:48:24.894996  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <acpi/>
	I1006 14:48:24.895014  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <apic/>
	I1006 14:48:24.895023  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <pae/>
	I1006 14:48:24.895030  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   </features>
	I1006 14:48:24.895050  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1006 14:48:24.895061  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <clock offset='utc'/>
	I1006 14:48:24.895073  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <on_poweroff>destroy</on_poweroff>
	I1006 14:48:24.895082  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <on_reboot>restart</on_reboot>
	I1006 14:48:24.895092  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <on_crash>destroy</on_crash>
	I1006 14:48:24.895098  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   <devices>
	I1006 14:48:24.895105  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1006 14:48:24.895110  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <disk type='file' device='cdrom'>
	I1006 14:48:24.895116  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <driver name='qemu' type='raw'/>
	I1006 14:48:24.895124  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/boot2docker.iso'/>
	I1006 14:48:24.895146  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <target dev='hdc' bus='scsi'/>
	I1006 14:48:24.895160  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <readonly/>
	I1006 14:48:24.895171  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1006 14:48:24.895179  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </disk>
	I1006 14:48:24.895195  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <disk type='file' device='disk'>
	I1006 14:48:24.895208  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1006 14:48:24.895226  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/stopped-upgrade-216364.rawdisk'/>
	I1006 14:48:24.895238  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <target dev='hda' bus='virtio'/>
	I1006 14:48:24.895260  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1006 14:48:24.895266  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </disk>
	I1006 14:48:24.895275  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1006 14:48:24.895285  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1006 14:48:24.895294  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </controller>
	I1006 14:48:24.895302  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1006 14:48:24.895315  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1006 14:48:24.895327  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1006 14:48:24.895335  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </controller>
	I1006 14:48:24.895345  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <interface type='network'>
	I1006 14:48:24.895353  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <mac address='52:54:00:8f:a1:9a'/>
	I1006 14:48:24.895367  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <source network='mk-stopped-upgrade-216364'/>
	I1006 14:48:24.895375  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <model type='virtio'/>
	I1006 14:48:24.895394  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1006 14:48:24.895408  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </interface>
	I1006 14:48:24.895419  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <interface type='network'>
	I1006 14:48:24.895428  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <mac address='52:54:00:e0:b1:88'/>
	I1006 14:48:24.895444  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <source network='default'/>
	I1006 14:48:24.895452  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <model type='virtio'/>
	I1006 14:48:24.895479  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1006 14:48:24.895492  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </interface>
	I1006 14:48:24.895503  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <serial type='pty'>
	I1006 14:48:24.895512  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <target type='isa-serial' port='0'>
	I1006 14:48:24.895523  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |         <model name='isa-serial'/>
	I1006 14:48:24.895531  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       </target>
	I1006 14:48:24.895540  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </serial>
	I1006 14:48:24.895548  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <console type='pty'>
	I1006 14:48:24.895559  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <target type='serial' port='0'/>
	I1006 14:48:24.895570  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </console>
	I1006 14:48:24.895581  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <input type='mouse' bus='ps2'/>
	I1006 14:48:24.895602  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <input type='keyboard' bus='ps2'/>
	I1006 14:48:24.895611  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <audio id='1' type='none'/>
	I1006 14:48:24.895622  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <memballoon model='virtio'>
	I1006 14:48:24.895632  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1006 14:48:24.895642  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </memballoon>
	I1006 14:48:24.895650  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     <rng model='virtio'>
	I1006 14:48:24.895660  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <backend model='random'>/dev/random</backend>
	I1006 14:48:24.895670  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1006 14:48:24.895675  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |     </rng>
	I1006 14:48:24.895679  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG |   </devices>
	I1006 14:48:24.895684  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | </domain>
	I1006 14:48:24.895688  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | 
	I1006 14:48:25.363664  781435 main.go:141] libmachine: (stopped-upgrade-216364) waiting for domain to start...
	I1006 14:48:25.365649  781435 main.go:141] libmachine: (stopped-upgrade-216364) domain is now running
	I1006 14:48:25.365665  781435 main.go:141] libmachine: (stopped-upgrade-216364) waiting for IP...
	I1006 14:48:25.366808  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:25.367623  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:25.367645  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:25.368026  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:25.368091  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:25.368024  781474 retry.go:31] will retry after 237.911819ms: waiting for domain to come up
	I1006 14:48:25.608040  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:25.608835  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:25.608850  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:25.609366  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:25.609384  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:25.609332  781474 retry.go:31] will retry after 273.170438ms: waiting for domain to come up
	I1006 14:48:25.884539  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:25.885320  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:25.885335  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:25.885775  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:25.885834  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:25.885752  781474 retry.go:31] will retry after 441.124809ms: waiting for domain to come up
	I1006 14:48:26.328461  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:26.329152  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:26.329182  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:26.329563  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:26.329598  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:26.329495  781474 retry.go:31] will retry after 485.611107ms: waiting for domain to come up
	I1006 14:48:26.816842  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:26.817693  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:26.817709  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:26.818102  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:26.818167  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:26.818072  781474 retry.go:31] will retry after 488.543518ms: waiting for domain to come up
	I1006 14:48:27.308040  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:27.308665  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:27.308691  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:27.309118  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:27.309143  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:27.309078  781474 retry.go:31] will retry after 946.508355ms: waiting for domain to come up
	I1006 14:48:28.257093  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:28.257693  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:28.257718  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:28.257979  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:28.257995  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:28.257925  781474 retry.go:31] will retry after 836.263561ms: waiting for domain to come up
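Each "waiting for domain to come up" round above checks the libvirt lease table (source=lease), falls back to the ARP cache (source=arp), and sleeps a randomized, growing delay before retrying. A sketch of that loop; the stub lookup and the exact backoff curve are assumptions, since retry.go's jitter differs from run to run:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP stands in for querying libvirt leases and then the ARP cache.
    func lookupIP(mac string) (string, error) { return "", errNoLease }

    // waitForIP polls until the domain reports an IP or the deadline passes,
    // sleeping a jittered, roughly doubling delay between attempts.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
        start := time.Now()
        delay := 200 * time.Millisecond
        for time.Since(start) < deadline {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // up to 50% jitter
            fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("domain %s never came up", mac)
    }

    func main() {
        if _, err := waitForIP("52:54:00:8f:a1:9a", 3*time.Second); err != nil {
            fmt.Println(err)
        }
    }
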
	I1006 14:48:25.895358  781281 docker.go:234] disabling docker service ...
	I1006 14:48:25.895452  781281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:48:25.929546  781281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:48:25.953038  781281 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:48:26.184851  781281 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:48:26.388894  781281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:48:26.428225  781281 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:48:26.455713  781281 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:48:26.455780  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.471571  781281 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 14:48:26.471662  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.487740  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.504412  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.520694  781281 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:48:26.538847  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.556429  781281 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.577085  781281 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:26.592493  781281 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:48:26.607223  781281 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:48:26.621468  781281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:26.826468  781281 ssh_runner.go:195] Run: sudo systemctl restart crio
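Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following content before crio is restarted (the section headers are illustrative; only the keys and values appear in the logged commands):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
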
	I1006 14:48:29.095785  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:29.096387  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:29.096400  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:29.096858  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:29.096882  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:29.096824  781474 retry.go:31] will retry after 1.070450811s: waiting for domain to come up
	I1006 14:48:30.168935  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:30.169553  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:30.169573  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:30.169910  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:30.169936  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:30.169879  781474 retry.go:31] will retry after 1.837104486s: waiting for domain to come up
	I1006 14:48:32.010107  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:32.010775  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:32.010800  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:32.011157  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:32.011186  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:32.011109  781474 retry.go:31] will retry after 2.026836627s: waiting for domain to come up
	I1006 14:48:34.039857  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:34.040463  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:34.040489  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:34.040772  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:34.040797  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:34.040741  781474 retry.go:31] will retry after 1.829310293s: waiting for domain to come up
	I1006 14:48:35.872287  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:35.872952  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:35.872970  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:35.873399  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:35.873422  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:35.873367  781474 retry.go:31] will retry after 2.28211388s: waiting for domain to come up
	I1006 14:48:38.159059  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:38.159571  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | no network interface addresses found for domain stopped-upgrade-216364 (source=lease)
	I1006 14:48:38.159614  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | trying to list again with source=arp
	I1006 14:48:38.159927  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find current IP address of domain stopped-upgrade-216364 in network mk-stopped-upgrade-216364 (interfaces detected: [])
	I1006 14:48:38.159975  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | I1006 14:48:38.159900  781474 retry.go:31] will retry after 4.52176898s: waiting for domain to come up
	I1006 14:48:42.685189  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:42.685907  781435 main.go:141] libmachine: (stopped-upgrade-216364) found domain IP: 192.168.72.220
	I1006 14:48:42.685930  781435 main.go:141] libmachine: (stopped-upgrade-216364) reserving static IP address...
	I1006 14:48:42.685945  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has current primary IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:42.686519  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | unable to find host DHCP lease matching {name: "stopped-upgrade-216364", mac: "52:54:00:8f:a1:9a", ip: "192.168.72.220"} in network mk-stopped-upgrade-216364
	I1006 14:48:42.925547  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | Getting to WaitForSSH function...
	I1006 14:48:42.925577  781435 main.go:141] libmachine: (stopped-upgrade-216364) reserved static IP address 192.168.72.220 for domain stopped-upgrade-216364
	I1006 14:48:42.925641  781435 main.go:141] libmachine: (stopped-upgrade-216364) waiting for SSH...
	I1006 14:48:42.929357  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:42.929788  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:42.929880  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:42.929932  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | Using SSH client type: external
	I1006 14:48:42.929956  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | Using SSH private key: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/id_rsa (-rw-------)
	I1006 14:48:42.929993  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1006 14:48:42.930003  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | About to run SSH command:
	I1006 14:48:42.930016  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | exit 0
	I1006 14:48:43.021533  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | SSH cmd err, output: <nil>: 
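	For reference, the external SSH probe logged above reassembles into a single command line (argument order normalized; a sketch, not minikube's exact invocation):
	  ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/id_rsa \
	    -p 22 docker@192.168.72.220 'exit 0'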
	I1006 14:48:43.021827  781435 main.go:141] libmachine: (stopped-upgrade-216364) domain creation complete
	I1006 14:48:43.022244  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetConfigRaw
	I1006 14:48:43.022870  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .DriverName
	I1006 14:48:43.023156  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .DriverName
	I1006 14:48:43.023392  781435 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1006 14:48:43.023406  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetState
	I1006 14:48:43.025307  781435 main.go:141] libmachine: Detecting operating system of created instance...
	I1006 14:48:43.025321  781435 main.go:141] libmachine: Waiting for SSH to be available...
	I1006 14:48:43.025329  781435 main.go:141] libmachine: Getting to WaitForSSH function...
	I1006 14:48:43.025342  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHHostname
	I1006 14:48:43.028556  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.029346  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:43.029378  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.029615  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHPort
	I1006 14:48:43.029801  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:43.029943  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:43.030060  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHUsername
	I1006 14:48:43.030217  781435 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:43.030617  781435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I1006 14:48:43.030624  781435 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1006 14:48:43.136926  781435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:48:43.136943  781435 main.go:141] libmachine: Detecting the provisioner...
	I1006 14:48:43.136955  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHHostname
	I1006 14:48:43.140507  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.140880  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:43.140911  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.141104  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHPort
	I1006 14:48:43.141305  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:43.141497  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:43.141635  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHUsername
	I1006 14:48:43.141786  781435 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:43.142114  781435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I1006 14:48:43.142120  781435 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1006 14:48:43.251569  781435 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb75713b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1006 14:48:43.251674  781435 main.go:141] libmachine: found compatible host: buildroot
	I1006 14:48:43.251681  781435 main.go:141] libmachine: Provisioning with buildroot...
	I1006 14:48:43.251690  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetMachineName
	I1006 14:48:43.251973  781435 buildroot.go:166] provisioning hostname "stopped-upgrade-216364"
	I1006 14:48:43.251998  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetMachineName
	I1006 14:48:43.252198  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHHostname
	I1006 14:48:43.255444  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.255927  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:43.255952  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.256162  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHPort
	I1006 14:48:43.256409  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:43.256608  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:43.256834  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHUsername
	I1006 14:48:43.257043  781435 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:43.257384  781435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I1006 14:48:43.257392  781435 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-216364 && echo "stopped-upgrade-216364" | sudo tee /etc/hostname
	I1006 14:48:43.380113  781435 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-216364
	
	I1006 14:48:43.380136  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHHostname
	I1006 14:48:43.383782  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.384374  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:43.384415  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.384749  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHPort
	I1006 14:48:43.384988  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:43.385201  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:43.385382  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHUsername
	I1006 14:48:43.385571  781435 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:43.386097  781435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I1006 14:48:43.386110  781435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-216364' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-216364/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-216364' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:48:43.507036  781435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
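	The script above first sets the kernel hostname, then makes /etc/hosts idempotent: if no entry for the new hostname exists, it rewrites an existing 127.0.1.1 line in place or appends one. A quick host-side check (sketch) that the provisioning took:
	  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/id_rsa \
	    docker@192.168.72.220 'hostname && grep "^127.0.1.1" /etc/hosts'
	  # expected: stopped-upgrade-216364
	  #           127.0.1.1 stopped-upgrade-216364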
	I1006 14:48:43.507056  781435 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21701-739942/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-739942/.minikube}
	I1006 14:48:43.507090  781435 buildroot.go:174] setting up certificates
	I1006 14:48:43.507113  781435 provision.go:83] configureAuth start
	I1006 14:48:43.507122  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetMachineName
	I1006 14:48:43.507444  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetIP
	I1006 14:48:43.511231  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.511753  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:43.511779  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.512025  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHHostname
	I1006 14:48:43.515521  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.515908  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:43.515927  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.516276  781435 provision.go:138] copyHostCerts
	I1006 14:48:43.516331  781435 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem, removing ...
	I1006 14:48:43.516348  781435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem
	I1006 14:48:43.516412  781435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem (1679 bytes)
	I1006 14:48:43.516508  781435 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem, removing ...
	I1006 14:48:43.516511  781435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem
	I1006 14:48:43.516537  781435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem (1078 bytes)
	I1006 14:48:43.516615  781435 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem, removing ...
	I1006 14:48:43.516620  781435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem
	I1006 14:48:43.516655  781435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem (1123 bytes)
	I1006 14:48:43.516756  781435 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-216364 san=[192.168.72.220 192.168.72.220 localhost 127.0.0.1 minikube stopped-upgrade-216364]
	I1006 14:48:43.690972  781435 provision.go:172] copyRemoteCerts
	I1006 14:48:43.691033  781435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:48:43.691059  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHHostname
	I1006 14:48:43.694825  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.695147  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:43.695176  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.695413  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHPort
	I1006 14:48:43.695636  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:43.695866  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHUsername
	I1006 14:48:43.696084  781435 sshutil.go:53] new ssh client: &{IP:192.168.72.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/id_rsa Username:docker}
	I1006 14:48:43.778438  781435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1006 14:48:43.804705  781435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1006 14:48:43.831088  781435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:48:43.857005  781435 provision.go:86] duration metric: configureAuth took 349.88005ms
	I1006 14:48:43.857026  781435 buildroot.go:189] setting minikube options for container-runtime
	I1006 14:48:43.857211  781435 config.go:182] Loaded profile config "stopped-upgrade-216364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1006 14:48:43.857294  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHHostname
	I1006 14:48:43.860770  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.861213  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:43.861238  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:43.861506  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHPort
	I1006 14:48:43.861844  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:43.862069  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:43.862269  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHUsername
	I1006 14:48:43.862487  781435 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:43.862856  781435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I1006 14:48:43.862867  781435 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:48:44.160838  781435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
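	The drop-in written above is presumably sourced by the guest's crio.service as an EnvironmentFile that feeds $CRIO_MINIKUBE_OPTIONS into ExecStart (an assumption about the ISO's unit file, not shown in the log). To verify on the guest:
	  cat /etc/sysconfig/crio.minikube          # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  systemctl cat crio | grep -A2 -i EnvironmentFile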
	I1006 14:48:44.160855  781435 main.go:141] libmachine: Checking connection to Docker...
	I1006 14:48:44.160866  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetURL
	I1006 14:48:44.162494  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | using libvirt version 8000000
	I1006 14:48:44.165573  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.166058  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:44.166107  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.166345  781435 main.go:141] libmachine: Docker is up and running!
	I1006 14:48:44.166358  781435 main.go:141] libmachine: Reticulating splines...
	I1006 14:48:44.166364  781435 client.go:171] LocalClient.Create took 19.700213853s
	I1006 14:48:44.166392  781435 start.go:167] duration metric: libmachine.API.Create for "stopped-upgrade-216364" took 19.700302515s
	I1006 14:48:44.166401  781435 start.go:300] post-start starting for "stopped-upgrade-216364" (driver="kvm2")
	I1006 14:48:44.166414  781435 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:48:44.166434  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .DriverName
	I1006 14:48:44.166765  781435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:48:44.166790  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHHostname
	I1006 14:48:44.169875  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.170464  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:44.170491  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.170751  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHPort
	I1006 14:48:44.171043  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:44.171263  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHUsername
	I1006 14:48:44.171461  781435 sshutil.go:53] new ssh client: &{IP:192.168.72.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/id_rsa Username:docker}
	I1006 14:48:44.254335  781435 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:48:44.258498  781435 info.go:137] Remote host: Buildroot 2021.02.12
	I1006 14:48:44.258517  781435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/addons for local assets ...
	I1006 14:48:44.258582  781435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/files for local assets ...
	I1006 14:48:44.258676  781435 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem -> 7438512.pem in /etc/ssl/certs
	I1006 14:48:44.258759  781435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:48:44.267638  781435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:48:44.289894  781435 start.go:303] post-start completed in 123.476947ms
	I1006 14:48:44.289951  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetConfigRaw
	I1006 14:48:44.290767  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetIP
	I1006 14:48:44.294139  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.294530  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:44.294556  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.294895  781435 profile.go:148] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/stopped-upgrade-216364/config.json ...
	I1006 14:48:44.295138  781435 start.go:128] duration metric: createHost completed in 19.851072526s
	I1006 14:48:44.295172  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHHostname
	I1006 14:48:44.298032  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.298408  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:44.298426  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.298684  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHPort
	I1006 14:48:44.298938  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:44.299123  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:44.299287  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHUsername
	I1006 14:48:44.299435  781435 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:44.299769  781435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I1006 14:48:44.299775  781435 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1006 14:48:44.407571  781435 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759762124.381235341
	
	I1006 14:48:44.407583  781435 fix.go:206] guest clock: 1759762124.381235341
	I1006 14:48:44.407597  781435 fix.go:219] Guest: 2025-10-06 14:48:44.381235341 +0000 UTC Remote: 2025-10-06 14:48:44.295159427 +0000 UTC m=+20.798535813 (delta=86.075914ms)
	I1006 14:48:44.407617  781435 fix.go:190] guest clock delta is within tolerance: 86.075914ms
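	The clock check samples the guest via `date +%s.%N` and diffs it against the host timestamp; a standalone sketch of the same comparison (bc assumed available on the host):
	  guest=$(ssh -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/id_rsa \
	    docker@192.168.72.220 'date +%s.%N')
	  host=$(date +%s.%N)
	  echo "delta: $(echo "$guest - $host" | bc)s"   # here ~0.086s, within tolerance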
	I1006 14:48:44.407621  781435 start.go:83] releasing machines lock for "stopped-upgrade-216364", held for 19.963775221s
	I1006 14:48:44.407640  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .DriverName
	I1006 14:48:44.407912  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetIP
	I1006 14:48:44.411654  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.412094  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:44.412110  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.412313  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .DriverName
	I1006 14:48:44.412834  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .DriverName
	I1006 14:48:44.413044  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .DriverName
	I1006 14:48:44.413150  781435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:48:44.413188  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHHostname
	I1006 14:48:44.413282  781435 ssh_runner.go:195] Run: cat /version.json
	I1006 14:48:44.413304  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHHostname
	I1006 14:48:44.416705  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.416986  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.417168  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:44.417194  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.417392  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHPort
	I1006 14:48:44.417515  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a1:9a", ip: ""} in network mk-stopped-upgrade-216364: {Iface:virbr4 ExpiryTime:2025-10-06 15:48:38 +0000 UTC Type:0 Mac:52:54:00:8f:a1:9a Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:stopped-upgrade-216364 Clientid:01:52:54:00:8f:a1:9a}
	I1006 14:48:44.417535  781435 main.go:141] libmachine: (stopped-upgrade-216364) DBG | domain stopped-upgrade-216364 has defined IP address 192.168.72.220 and MAC address 52:54:00:8f:a1:9a in network mk-stopped-upgrade-216364
	I1006 14:48:44.417550  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:44.417740  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHPort
	I1006 14:48:44.417750  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHUsername
	I1006 14:48:44.417962  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHKeyPath
	I1006 14:48:44.417959  781435 sshutil.go:53] new ssh client: &{IP:192.168.72.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/id_rsa Username:docker}
	I1006 14:48:44.418128  781435 main.go:141] libmachine: (stopped-upgrade-216364) Calling .GetSSHUsername
	I1006 14:48:44.418263  781435 sshutil.go:53] new ssh client: &{IP:192.168.72.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/stopped-upgrade-216364/id_rsa Username:docker}
	I1006 14:48:44.517826  781435 ssh_runner.go:195] Run: systemctl --version
	I1006 14:48:44.523430  781435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:48:44.689427  781435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:48:44.695222  781435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:48:44.695297  781435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:48:44.711200  781435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
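	The find arguments above are logged with shell escaping stripped; re-quoted for an interactive shell (with the mv normalized to a positional argument), the same rename-aside of bridge/podman CNI configs reads:
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' sh {} \;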
	I1006 14:48:44.711222  781435 start.go:472] detecting cgroup driver to use...
	I1006 14:48:44.711355  781435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:48:44.728575  781435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:48:44.742520  781435 docker.go:203] disabling cri-docker service (if available) ...
	I1006 14:48:44.742577  781435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:48:44.758530  781435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:48:44.772801  781435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:48:44.877311  781435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:48:44.989235  781435 docker.go:219] disabling docker service ...
	I1006 14:48:44.989307  781435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:48:45.003915  781435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:48:45.017404  781435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:48:45.132569  781435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:48:45.253226  781435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
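	Condensed, the sequence above stops, disables, and masks every competing runtime so only CRI-O can own the CRI socket (a sketch of the same effect in three commands):
	  sudo systemctl stop -f containerd cri-docker.socket cri-docker.service docker.socket docker.service
	  sudo systemctl disable cri-docker.socket docker.socket
	  sudo systemctl mask cri-docker.service docker.service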
	I1006 14:48:45.266399  781435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:48:45.284179  781435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1006 14:48:45.284247  781435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:45.294334  781435 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 14:48:45.294407  781435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:45.304240  781435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:45.314063  781435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:48:45.324523  781435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
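	After the three sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read as follows (sketch; surrounding keys vary by ISO build):
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.9"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"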
	I1006 14:48:45.335706  781435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:48:45.344731  781435 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1006 14:48:45.344783  781435 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1006 14:48:45.357852  781435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
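	The sysctl failure above is the expected fallback path: the net.bridge.* keys only exist once br_netfilter is loaded. The manual equivalent:
	  sudo modprobe br_netfilter
	  sysctl net.bridge.bridge-nf-call-iptables        # resolvable after modprobe
	  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # enable forwarding for pod traffic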
	I1006 14:48:45.366772  781435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:45.483669  781435 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:48:45.664943  781435 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:48:45.665008  781435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:48:45.671074  781435 start.go:540] Will wait 60s for crictl version
	I1006 14:48:45.671126  781435 ssh_runner.go:195] Run: which crictl
	I1006 14:48:45.674817  781435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 14:48:45.717603  781435 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1006 14:48:45.717703  781435 ssh_runner.go:195] Run: crio --version
	I1006 14:48:45.763086  781435 ssh_runner.go:195] Run: crio --version
	I1006 14:48:45.819286  781435 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	
	
	==> CRI-O <==
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.710778672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759762127710740112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa87b0b3-6b06-4e8f-a261-9257e575bef0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.711961887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd80f82b-b27f-4e9b-b13f-ac846635fc20 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.712087424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd80f82b-b27f-4e9b-b13f-ac846635fc20 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.712159087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dd80f82b-b27f-4e9b-b13f-ac846635fc20 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.754018021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2a1160d-16b8-481d-903a-e044e1df13e5 name=/runtime.v1.RuntimeService/Version
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.754098108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2a1160d-16b8-481d-903a-e044e1df13e5 name=/runtime.v1.RuntimeService/Version
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.755768116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e8099d3-024d-4550-b780-0d292977e312 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.756027103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759762127755993714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e8099d3-024d-4550-b780-0d292977e312 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.756720839Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e3efd09-4cf3-4e87-aaad-9606a474875f name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.756898804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e3efd09-4cf3-4e87-aaad-9606a474875f name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.756981949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9e3efd09-4cf3-4e87-aaad-9606a474875f name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.796387945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59b40ed5-3e09-4f45-b66e-e1e494bd315c name=/runtime.v1.RuntimeService/Version
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.796486667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59b40ed5-3e09-4f45-b66e-e1e494bd315c name=/runtime.v1.RuntimeService/Version
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.798656115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96058268-43a2-4adb-b756-ffb79a01426d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.798808194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759762127798786710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96058268-43a2-4adb-b756-ffb79a01426d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.799570817Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53aa0878-4027-47ea-ab33-9bb2c0e86efe name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.799648084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53aa0878-4027-47ea-ab33-9bb2c0e86efe name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.799688223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=53aa0878-4027-47ea-ab33-9bb2c0e86efe name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.845352453Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c21a16e7-5956-4fe5-b9cf-03312bbb5ef7 name=/runtime.v1.RuntimeService/Version
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.845418067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c21a16e7-5956-4fe5-b9cf-03312bbb5ef7 name=/runtime.v1.RuntimeService/Version
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.847011726Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70922095-cde9-469c-97c7-4c19ae360f81 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.847585689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759762127847467657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70922095-cde9-469c-97c7-4c19ae360f81 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.848301184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d9d3d34-d066-48fd-82ac-b0fa37f1ee85 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.848381294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d9d3d34-d066-48fd-82ac-b0fa37f1ee85 name=/runtime.v1.RuntimeService/ListContainers
	Oct 06 14:48:47 NoKubernetes-419392 crio[814]: time="2025-10-06 14:48:47.848415755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9d9d3d34-d066-48fd-82ac-b0fa37f1ee85 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
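	The v0.0.0 path is consistent with a profile started without Kubernetes: minikube stages kubectl under the profile's recorded KubernetesVersion, which is v0.0.0 here, so no binary exists and this describe-nodes step fails by construction (an inference from the profile name; the failure actually under test is ProfileList, below). A guest-side check:
	  ls /var/lib/minikube/binaries/ 2>/dev/null || echo "no kubectl staged"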
	
	
	==> dmesg <==
	[Oct 6 14:46] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000055] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005082] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.218975] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.103750] kauditd_printk_skb: 1 callbacks suppressed
	[Oct 6 14:47] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> kernel <==
	 14:48:48 up 2 min,  0 users,  load average: 0.06, 0.07, 0.03
	Linux NoKubernetes-419392 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p NoKubernetes-419392 -n NoKubernetes-419392
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p NoKubernetes-419392 -n NoKubernetes-419392: exit status 6 (281.527811ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 14:48:48.359649  781746 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-419392" does not appear in /home/jenkins/minikube-integration/21701-739942/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "NoKubernetes-419392" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestNoKubernetes/serial/ProfileList (124.03s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (42.09s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-670840 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-670840 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.28774919s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-670840] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-670840" primary control-plane node in "pause-670840" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-670840" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 14:47:40.068956  780645 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:47:40.069220  780645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:47:40.069237  780645 out.go:374] Setting ErrFile to fd 2...
	I1006 14:47:40.069241  780645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:47:40.069462  780645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:47:40.070018  780645 out.go:368] Setting JSON to false
	I1006 14:47:40.071035  780645 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16211,"bootTime":1759745849,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:47:40.071146  780645 start.go:140] virtualization: kvm guest
	I1006 14:47:40.073166  780645 out.go:179] * [pause-670840] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:47:40.074900  780645 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:47:40.074915  780645 notify.go:220] Checking for updates...
	I1006 14:47:40.077282  780645 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:47:40.078465  780645 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 14:47:40.079748  780645 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 14:47:40.081222  780645 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:47:40.083057  780645 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:47:40.084784  780645 config.go:182] Loaded profile config "pause-670840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:47:40.085185  780645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:47:40.085255  780645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:47:40.100708  780645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44063
	I1006 14:47:40.101329  780645 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:47:40.102016  780645 main.go:141] libmachine: Using API Version  1
	I1006 14:47:40.102050  780645 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:47:40.102555  780645 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:47:40.102823  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:40.103163  780645 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:47:40.103657  780645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:47:40.103747  780645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:47:40.118508  780645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I1006 14:47:40.119015  780645 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:47:40.119550  780645 main.go:141] libmachine: Using API Version  1
	I1006 14:47:40.119566  780645 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:47:40.119914  780645 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:47:40.120132  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:40.156695  780645 out.go:179] * Using the kvm2 driver based on existing profile
	I1006 14:47:40.157843  780645 start.go:304] selected driver: kvm2
	I1006 14:47:40.157862  780645 start.go:924] validating driver "kvm2" against &{Name:pause-670840 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-670840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.41 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:47:40.158052  780645 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:47:40.158451  780645 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:47:40.158543  780645 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 14:47:40.175128  780645 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 14:47:40.175169  780645 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 14:47:40.189998  780645 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
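
The two install.go probes above shell out to the driver binary and compare its reported version. A minimal Go sketch of that check, assuming a hypothetical driver that prints its version via a `version` subcommand in the form `... version X.Y.Z`; this is an illustration, not minikube's actual install.go code:

	package main

	import (
		"fmt"
		"os/exec"
		"regexp"
	)

	// validateDriver runs "<driver> version" and extracts the semantic version,
	// loosely mirroring the validation logged above. The subcommand name and
	// output format are assumptions made for this sketch.
	func validateDriver(path, wantVersion string) error {
		out, err := exec.Command(path, "version").CombinedOutput()
		if err != nil {
			return fmt.Errorf("running %s version: %w", path, err)
		}
		re := regexp.MustCompile(`version (\d+\.\d+\.\d+)`)
		m := re.FindSubmatch(out)
		if m == nil {
			return fmt.Errorf("could not parse version from %q", out)
		}
		if got := string(m[1]); got != wantVersion {
			return fmt.Errorf("driver version %s does not match expected %s", got, wantVersion)
		}
		return nil
	}

	func main() {
		if err := validateDriver("docker-machine-driver-kvm2", "1.37.0"); err != nil {
			fmt.Println("validation failed:", err)
		}
	}
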
	I1006 14:47:40.190820  780645 cni.go:84] Creating CNI manager for ""
	I1006 14:47:40.190890  780645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:47:40.190990  780645 start.go:348] cluster config:
	{Name:pause-670840 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-670840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.41 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:47:40.191210  780645 iso.go:125] acquiring lock: {Name:mk8de6812bb58933af0bc6eb1d955bf118a3bcec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:47:40.193101  780645 out.go:179] * Starting "pause-670840" primary control-plane node in "pause-670840" cluster
	I1006 14:47:40.194374  780645 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:47:40.194419  780645 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:47:40.194428  780645 cache.go:58] Caching tarball of preloaded images
	I1006 14:47:40.194511  780645 preload.go:233] Found /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:47:40.194522  780645 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:47:40.194664  780645 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/config.json ...
	I1006 14:47:40.194871  780645 start.go:360] acquireMachinesLock for pause-670840: {Name:mkc5be1cfc8fcefa1839aef4c67a376cc5095e30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1006 14:47:40.194921  780645 start.go:364] duration metric: took 30.694µs to acquireMachinesLock for "pause-670840"
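
The machines lock above is described by a name plus Delay and Timeout fields (Delay:500ms Timeout:13m0s). A sketch of one way such a retrying lock can be built on flock(2); this is an illustrative reconstruction under those parameters, not minikube's actual lock implementation:

	package main

	import (
		"fmt"
		"os"
		"syscall"
		"time"
	)

	// acquireFileLock polls a non-blocking flock every delay until timeout,
	// loosely mirroring the Delay/Timeout fields in the lock spec logged above.
	func acquireFileLock(path string, delay, timeout time.Duration) (*os.File, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
		if err != nil {
			return nil, err
		}
		deadline := time.Now().Add(timeout)
		for {
			err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
			if err == nil {
				return f, nil
			}
			if time.Now().After(deadline) {
				f.Close()
				return nil, fmt.Errorf("timed out acquiring %s: %v", path, err)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		f, err := acquireFileLock("/tmp/pause-670840.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		fmt.Printf("took %s to acquire lock\n", time.Since(start))
	}
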
	I1006 14:47:40.194963  780645 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:47:40.194971  780645 fix.go:54] fixHost starting: 
	I1006 14:47:40.195251  780645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:47:40.195303  780645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:47:40.211296  780645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I1006 14:47:40.212032  780645 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:47:40.212615  780645 main.go:141] libmachine: Using API Version  1
	I1006 14:47:40.212643  780645 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:47:40.213076  780645 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:47:40.213301  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:40.213546  780645 main.go:141] libmachine: (pause-670840) Calling .GetState
	I1006 14:47:40.215526  780645 fix.go:112] recreateIfNeeded on pause-670840: state=Running err=<nil>
	W1006 14:47:40.215566  780645 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:47:40.217459  780645 out.go:252] * Updating the running kvm2 "pause-670840" VM ...
	I1006 14:47:40.217488  780645 machine.go:93] provisionDockerMachine start ...
	I1006 14:47:40.217501  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:40.217746  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:40.221113  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.221649  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:40.221680  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.221851  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:40.222090  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:40.222331  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:40.222495  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:40.222728  780645 main.go:141] libmachine: Using SSH client type: native
	I1006 14:47:40.223027  780645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.41 22 <nil> <nil>}
	I1006 14:47:40.223041  780645 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:47:40.340960  780645 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-670840
	
	I1006 14:47:40.340999  780645 main.go:141] libmachine: (pause-670840) Calling .GetMachineName
	I1006 14:47:40.341322  780645 buildroot.go:166] provisioning hostname "pause-670840"
	I1006 14:47:40.341355  780645 main.go:141] libmachine: (pause-670840) Calling .GetMachineName
	I1006 14:47:40.341642  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:40.345221  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.345695  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:40.345726  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.345901  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:40.346129  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:40.346321  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:40.346512  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:40.346769  780645 main.go:141] libmachine: Using SSH client type: native
	I1006 14:47:40.347014  780645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.41 22 <nil> <nil>}
	I1006 14:47:40.347029  780645 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-670840 && echo "pause-670840" | sudo tee /etc/hostname
	I1006 14:47:40.480173  780645 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-670840
	
	I1006 14:47:40.480217  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:40.483468  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.484033  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:40.484065  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.484346  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:40.484561  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:40.484789  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:40.484975  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:40.485200  780645 main.go:141] libmachine: Using SSH client type: native
	I1006 14:47:40.485409  780645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.41 22 <nil> <nil>}
	I1006 14:47:40.485424  780645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-670840' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-670840/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-670840' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:47:40.609044  780645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:47:40.609081  780645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21701-739942/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-739942/.minikube}
	I1006 14:47:40.609127  780645 buildroot.go:174] setting up certificates
	I1006 14:47:40.609144  780645 provision.go:84] configureAuth start
	I1006 14:47:40.609162  780645 main.go:141] libmachine: (pause-670840) Calling .GetMachineName
	I1006 14:47:40.609518  780645 main.go:141] libmachine: (pause-670840) Calling .GetIP
	I1006 14:47:40.613100  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.613583  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:40.613642  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.613827  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:40.617327  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.617892  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:40.617926  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.618136  780645 provision.go:143] copyHostCerts
	I1006 14:47:40.618211  780645 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem, removing ...
	I1006 14:47:40.618238  780645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem
	I1006 14:47:40.618315  780645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/ca.pem (1078 bytes)
	I1006 14:47:40.618469  780645 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem, removing ...
	I1006 14:47:40.618481  780645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem
	I1006 14:47:40.618508  780645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/cert.pem (1123 bytes)
	I1006 14:47:40.618575  780645 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem, removing ...
	I1006 14:47:40.618583  780645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem
	I1006 14:47:40.618622  780645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-739942/.minikube/key.pem (1679 bytes)
	I1006 14:47:40.618681  780645 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem org=jenkins.pause-670840 san=[127.0.0.1 192.168.72.41 localhost minikube pause-670840]
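
provision.go issues a server certificate signed by the minikube CA with the SAN set shown above (127.0.0.1, 192.168.72.41, localhost, minikube, pause-670840). A self-contained Go sketch of issuing such a cert with crypto/x509; the key size, serial numbers, and the self-signed throwaway CA are illustrative assumptions (minikube loads ca.pem/ca-key.pem from its certs directory instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA (errors elided for brevity).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs logged above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-670840"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.41")},
			DNSNames:     []string{"localhost", "minikube", "pause-670840"},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
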
	I1006 14:47:40.699546  780645 provision.go:177] copyRemoteCerts
	I1006 14:47:40.699641  780645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:47:40.699680  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:40.703693  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.704105  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:40.704136  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.704442  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:40.704678  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:40.704872  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:40.705039  780645 sshutil.go:53] new ssh client: &{IP:192.168.72.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/pause-670840/id_rsa Username:docker}
	I1006 14:47:40.807288  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1006 14:47:40.852887  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1006 14:47:40.894766  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:47:40.932479  780645 provision.go:87] duration metric: took 323.288447ms to configureAuth
	I1006 14:47:40.932523  780645 buildroot.go:189] setting minikube options for container-runtime
	I1006 14:47:40.932770  780645 config.go:182] Loaded profile config "pause-670840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:47:40.932844  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:40.936902  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.937392  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:40.937424  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:40.937722  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:40.937984  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:40.938191  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:40.938398  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:40.938611  780645 main.go:141] libmachine: Using SSH client type: native
	I1006 14:47:40.938883  780645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.41 22 <nil> <nil>}
	I1006 14:47:40.938900  780645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:47:46.551047  780645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:47:46.551085  780645 machine.go:96] duration metric: took 6.333587657s to provisionDockerMachine
	I1006 14:47:46.551103  780645 start.go:293] postStartSetup for "pause-670840" (driver="kvm2")
	I1006 14:47:46.551119  780645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:47:46.551144  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.551574  780645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:47:46.551630  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.555376  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.555943  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.555973  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.556269  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.556548  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.556771  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.557011  780645 sshutil.go:53] new ssh client: &{IP:192.168.72.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/pause-670840/id_rsa Username:docker}
	I1006 14:47:46.648395  780645 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:47:46.654450  780645 info.go:137] Remote host: Buildroot 2025.02
	I1006 14:47:46.654480  780645 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/addons for local assets ...
	I1006 14:47:46.654558  780645 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/files for local assets ...
	I1006 14:47:46.654672  780645 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem -> 7438512.pem in /etc/ssl/certs
	I1006 14:47:46.654806  780645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:47:46.668862  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:47:46.702103  780645 start.go:296] duration metric: took 150.97698ms for postStartSetup
	I1006 14:47:46.702163  780645 fix.go:56] duration metric: took 6.507190638s for fixHost
	I1006 14:47:46.702191  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.705511  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.705982  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.706038  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.706329  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.706561  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.706785  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.706994  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.707213  780645 main.go:141] libmachine: Using SSH client type: native
	I1006 14:47:46.707476  780645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.41 22 <nil> <nil>}
	I1006 14:47:46.707489  780645 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1006 14:47:46.824788  780645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759762066.822490500
	
	I1006 14:47:46.824817  780645 fix.go:216] guest clock: 1759762066.822490500
	I1006 14:47:46.824828  780645 fix.go:229] Guest: 2025-10-06 14:47:46.8224905 +0000 UTC Remote: 2025-10-06 14:47:46.702169037 +0000 UTC m=+6.684926291 (delta=120.321463ms)
	I1006 14:47:46.824855  780645 fix.go:200] guest clock delta is within tolerance: 120.321463ms
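
fix.go reads the guest clock with `date +%s.%N`, subtracts the host-side timestamp, and accepts the drift when it falls within tolerance. A small Go sketch of that comparison using the exact values logged above; the 1s tolerance constant is an assumption for illustration:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's "date +%s.%N" output and returns how far it
	// sits ahead of (positive) or behind (negative) the given host time.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Unix(0, 1759762066702169037) // the "Remote" timestamp from the log above
		delta, err := clockDelta("1759762066.822490500", host)
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // illustrative; minikube's threshold may differ
		fmt.Printf("delta=%s within tolerance=%v\n", delta, delta > -tolerance && delta < tolerance)
	}
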
	I1006 14:47:46.824861  780645 start.go:83] releasing machines lock for "pause-670840", held for 6.629929566s
	I1006 14:47:46.824885  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.825267  780645 main.go:141] libmachine: (pause-670840) Calling .GetIP
	I1006 14:47:46.828900  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.829416  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.829445  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.829693  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.830413  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.830662  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.830796  780645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:47:46.830856  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.830919  780645 ssh_runner.go:195] Run: cat /version.json
	I1006 14:47:46.830938  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.834756  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.834891  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.835244  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.835280  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.835321  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.835337  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.835553  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.835728  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.835818  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.835900  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.835986  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.836058  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.836290  780645 sshutil.go:53] new ssh client: &{IP:192.168.72.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/pause-670840/id_rsa Username:docker}
	I1006 14:47:46.836304  780645 sshutil.go:53] new ssh client: &{IP:192.168.72.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/pause-670840/id_rsa Username:docker}
	I1006 14:47:46.922445  780645 ssh_runner.go:195] Run: systemctl --version
	I1006 14:47:46.957004  780645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:47:47.116951  780645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:47:47.125508  780645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:47:47.125631  780645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:47:47.138220  780645 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:47:47.138256  780645 start.go:495] detecting cgroup driver to use...
	I1006 14:47:47.138351  780645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:47:47.162172  780645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:47:47.182890  780645 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:47:47.182957  780645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:47:47.208827  780645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:47:47.229853  780645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:47:47.446718  780645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:47:47.656628  780645 docker.go:234] disabling docker service ...
	I1006 14:47:47.656734  780645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:47:47.690034  780645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:47:47.712894  780645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:47:47.943787  780645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:47:48.125561  780645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:47:48.147227  780645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:47:48.176124  780645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:47:48.176194  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.192411  780645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 14:47:48.192508  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.208512  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.223634  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.239630  780645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:47:48.256462  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.281556  780645 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.352417  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
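
The sed/grep pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, re-adds conmon_cgroup, and injects an unprivileged-port sysctl. Assuming the drop-in started from stock values, the edited fragment would plausibly end up reading as follows (section headers shown for orientation; the real file layout may differ):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
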
	I1006 14:47:48.387837  780645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:47:48.418584  780645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:47:48.457937  780645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:47:48.804638  780645 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:47:49.513216  780645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:47:49.513314  780645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:47:49.519615  780645 start.go:563] Will wait 60s for crictl version
	I1006 14:47:49.519711  780645 ssh_runner.go:195] Run: which crictl
	I1006 14:47:49.524999  780645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 14:47:49.566718  780645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
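
start.go waits up to 60s for the CRI socket to appear before probing crictl, as logged above. A Go sketch of that wait expressed as a stat poll with a deadline; the 250ms poll interval is an assumed value:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls os.Stat on the given path until it exists or the
	// deadline passes, mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(250 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio socket is ready")
	}
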
	I1006 14:47:49.566834  780645 ssh_runner.go:195] Run: crio --version
	I1006 14:47:49.601887  780645 ssh_runner.go:195] Run: crio --version
	I1006 14:47:49.645889  780645 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1006 14:47:49.647376  780645 main.go:141] libmachine: (pause-670840) Calling .GetIP
	I1006 14:47:49.650826  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:49.651277  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:49.651304  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:49.651721  780645 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1006 14:47:49.657780  780645 kubeadm.go:883] updating cluster {Name:pause-670840 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-670840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.41 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:47:49.657948  780645 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:47:49.658041  780645 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:47:49.716165  780645 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:47:49.716200  780645 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:47:49.716266  780645 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:47:49.767788  780645 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:47:49.767813  780645 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:47:49.767821  780645 kubeadm.go:934] updating node { 192.168.72.41 8443 v1.34.1 crio true true} ...
	I1006 14:47:49.767996  780645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-670840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-670840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
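
The kubelet unit above pins ExecStart with --hostname-override and --node-ip taken from the node entry in the cluster config. A Go text/template sketch that renders the same unit body; the template shape is inferred from the logged output rather than copied from minikube's source:

	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		// Values taken from the node entry in the cluster config above.
		t.Execute(os.Stdout, map[string]string{
			"Runtime":           "crio",
			"KubernetesVersion": "v1.34.1",
			"NodeName":          "pause-670840",
			"NodeIP":            "192.168.72.41",
		})
	}
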
	I1006 14:47:49.768090  780645 ssh_runner.go:195] Run: crio config
	I1006 14:47:49.824306  780645 cni.go:84] Creating CNI manager for ""
	I1006 14:47:49.824344  780645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:47:49.824384  780645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:47:49.824424  780645 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.41 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-670840 NodeName:pause-670840 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:47:49.824678  780645 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-670840"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:47:49.824797  780645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:47:49.839381  780645 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:47:49.839470  780645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:47:49.855692  780645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1006 14:47:49.880958  780645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:47:49.907128  780645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1006 14:47:49.933706  780645 ssh_runner.go:195] Run: grep 192.168.72.41	control-plane.minikube.internal$ /etc/hosts
	I1006 14:47:49.940022  780645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:47:50.111937  780645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:47:50.132196  780645 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840 for IP: 192.168.72.41
	I1006 14:47:50.132221  780645 certs.go:195] generating shared ca certs ...
	I1006 14:47:50.132237  780645 certs.go:227] acquiring lock for ca certs: {Name:mkac26b60e1fd10143a5d4dc5ca0de64e9dd4f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:47:50.132434  780645 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key
	I1006 14:47:50.132497  780645 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key
	I1006 14:47:50.132508  780645 certs.go:257] generating profile certs ...
	I1006 14:47:50.132640  780645 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/client.key
	I1006 14:47:50.132730  780645 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/apiserver.key.24981bcd
	I1006 14:47:50.132788  780645 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/proxy-client.key
	I1006 14:47:50.132958  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem (1338 bytes)
	W1006 14:47:50.132989  780645 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851_empty.pem, impossibly tiny 0 bytes
	I1006 14:47:50.132997  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem (1679 bytes)
	I1006 14:47:50.133023  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem (1078 bytes)
	I1006 14:47:50.133052  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:47:50.133084  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem (1679 bytes)
	I1006 14:47:50.133135  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:47:50.133955  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:47:50.169976  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 14:47:50.205090  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:47:50.241959  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:47:50.277867  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 14:47:50.310839  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:47:50.352120  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:47:50.388700  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:47:50.501773  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /usr/share/ca-certificates/7438512.pem (1708 bytes)
	I1006 14:47:50.564110  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:47:50.661959  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem --> /usr/share/ca-certificates/743851.pem (1338 bytes)
	I1006 14:47:50.754090  780645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:47:50.809344  780645 ssh_runner.go:195] Run: openssl version
	I1006 14:47:50.821382  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:47:50.846163  780645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:47:50.856996  780645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:50 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:47:50.857078  780645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:47:50.874879  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:47:50.899354  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/743851.pem && ln -fs /usr/share/ca-certificates/743851.pem /etc/ssl/certs/743851.pem"
	I1006 14:47:50.920272  780645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/743851.pem
	I1006 14:47:50.930864  780645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 13:59 /usr/share/ca-certificates/743851.pem
	I1006 14:47:50.930957  780645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/743851.pem
	I1006 14:47:50.945398  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/743851.pem /etc/ssl/certs/51391683.0"
	I1006 14:47:50.967680  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7438512.pem && ln -fs /usr/share/ca-certificates/7438512.pem /etc/ssl/certs/7438512.pem"
	I1006 14:47:51.000269  780645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7438512.pem
	I1006 14:47:51.009946  780645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 13:59 /usr/share/ca-certificates/7438512.pem
	I1006 14:47:51.010040  780645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7438512.pem
	I1006 14:47:51.021415  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7438512.pem /etc/ssl/certs/3ec20f2e.0"
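
Each CA bundle copied into /usr/share/ca-certificates is made visible to OpenSSL by symlinking it as `<subject-hash>.0` under /etc/ssl/certs (the b5213941.0, 51391683.0, and 3ec20f2e.0 links above). A Go sketch of that step, shelling out to openssl just as the logged commands do:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash asks openssl for the certificate's subject hash and
	// creates the "<hash>.0" symlink that OpenSSL's lookup expects in certsDir.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace any stale link, like the "ln -fs" above
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}
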
	I1006 14:47:51.036430  780645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:47:51.048362  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:47:51.059802  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:47:51.074755  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:47:51.097221  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:47:51.111515  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:47:51.126076  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
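
The run of `openssl x509 -noout -checkend 86400` calls above asks, for each control-plane certificate, whether it expires within the next 24 hours. The same check expressed with crypto/x509 in Go:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// matching what "openssl x509 -noout -checkend <seconds>" tests above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", expiring)
	}
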
	I1006 14:47:51.136261  780645 kubeadm.go:400] StartCluster: {Name:pause-670840 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-670840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.41 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:47:51.136437  780645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:47:51.136534  780645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:47:51.213622  780645 cri.go:89] found id: "24692b5695875a0e07b8044479544bd940fa12fb399ecbcfcb42c79741c24af1"
	I1006 14:47:51.213657  780645 cri.go:89] found id: "6f1f73cc4a476e62f0bd839c947f2ab1c2014a3e06e2060ee96869e039e1c125"
	I1006 14:47:51.213664  780645 cri.go:89] found id: "8aeec49d42a25a681a66edb73e04dc51bdd60cc474a3560ed674b9a0c9ba6dc7"
	I1006 14:47:51.213670  780645 cri.go:89] found id: "0dd1035d820529039269bde549155f111bc74b5b3b5019542983cf0d262d42f9"
	I1006 14:47:51.213675  780645 cri.go:89] found id: "f0ca3c7483e87d53990c734308aedb63f5b38fb6a25bfd03e22d7dec5a050cfb"
	I1006 14:47:51.213680  780645 cri.go:89] found id: "3b13529d4e4132c35fa76f6df0347178f1c6dc37e51ffc1fd1f6cd6c4d317d1e"
	I1006 14:47:51.213685  780645 cri.go:89] found id: "97e6c2494ec684f14cf5a4ab45bd825b7029c38692f2aabffd6254a6b52403a8"
	I1006 14:47:51.213689  780645 cri.go:89] found id: ""
	I1006 14:47:51.213757  780645 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
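
The container enumeration above (cri.go) shells into the guest and filters containers by pod namespace. A minimal sketch of the same two probes, run inside the minikube VM (both commands are taken verbatim from the log; crictl and runc are on PATH in the Buildroot guest):

	# list all kube-system container IDs known to CRI-O, running or not
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# cross-check against the low-level runtime's view of live containers
	sudo runc list -f json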
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-670840 -n pause-670840
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-670840 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-670840 logs -n 25: (1.657625492s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-702246 sudo containerd config dump                                                                                                                                                                                                        │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-702246 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-702246 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-702246 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-702246 sudo crio config                                                                                                                                                                                                                   │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │                     │
	│ delete  │ -p cilium-702246                                                                                                                                                                                                                                    │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:45 UTC │
	│ start   │ -p cert-expiration-435206 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                    │ cert-expiration-435206    │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p NoKubernetes-419392 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                     │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p running-upgrade-455354 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                  │ running-upgrade-455354    │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ ssh     │ force-systemd-flag-640885 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                                │ force-systemd-flag-640885 │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:45 UTC │
	│ delete  │ -p force-systemd-flag-640885                                                                                                                                                                                                                        │ force-systemd-flag-640885 │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:45 UTC │
	│ start   │ -p cert-options-809645 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ delete  │ -p NoKubernetes-419392                                                                                                                                                                                                                              │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p NoKubernetes-419392 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                     │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-455354 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                         │ running-upgrade-455354    │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │                     │
	│ delete  │ -p running-upgrade-455354                                                                                                                                                                                                                           │ running-upgrade-455354    │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p pause-670840 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                 │ pause-670840              │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:47 UTC │
	│ ssh     │ -p NoKubernetes-419392 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │                     │
	│ ssh     │ cert-options-809645 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ ssh     │ -p cert-options-809645 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ delete  │ -p cert-options-809645                                                                                                                                                                                                                              │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kubernetes-upgrade-317912 │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:47 UTC │
	│ start   │ -p pause-670840 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                          │ pause-670840              │ jenkins │ v1.37.0 │ 06 Oct 25 14:47 UTC │ 06 Oct 25 14:48 UTC │
	│ stop    │ -p kubernetes-upgrade-317912                                                                                                                                                                                                                        │ kubernetes-upgrade-317912 │ jenkins │ v1.37.0 │ 06 Oct 25 14:47 UTC │ 06 Oct 25 14:47 UTC │
	│ start   │ -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kubernetes-upgrade-317912 │ jenkins │ v1.37.0 │ 06 Oct 25 14:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:47:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
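	(For reference, that header format decodes as: severity [I/W/E/F], month and day, wall-clock time with microseconds, thread id — here the minikube process id — then source file and line. So the first entry below, "I1006 14:47:47.325966  780799 out.go:360]", is an Info line logged Oct 6 at 14:47:47.325966 by process 780799 from out.go line 360.)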
	I1006 14:47:47.325966  780799 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:47:47.326078  780799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:47:47.326086  780799 out.go:374] Setting ErrFile to fd 2...
	I1006 14:47:47.326090  780799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:47:47.326322  780799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:47:47.326761  780799 out.go:368] Setting JSON to false
	I1006 14:47:47.327720  780799 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16218,"bootTime":1759745849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:47:47.327842  780799 start.go:140] virtualization: kvm guest
	I1006 14:47:47.329989  780799 out.go:179] * [kubernetes-upgrade-317912] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:47:47.331419  780799 notify.go:220] Checking for updates...
	I1006 14:47:47.331423  780799 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:47:47.332911  780799 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:47:47.334439  780799 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 14:47:47.335671  780799 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 14:47:47.336940  780799 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:47:47.338577  780799 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:47:47.340444  780799 config.go:182] Loaded profile config "kubernetes-upgrade-317912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1006 14:47:47.340887  780799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:47:47.340947  780799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:47:47.355245  780799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37429
	I1006 14:47:47.356084  780799 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:47:47.356743  780799 main.go:141] libmachine: Using API Version  1
	I1006 14:47:47.356771  780799 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:47:47.357223  780799 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:47:47.357461  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:47:47.357793  780799 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:47:47.358248  780799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:47:47.358307  780799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:47:47.373396  780799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I1006 14:47:47.374059  780799 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:47:47.374725  780799 main.go:141] libmachine: Using API Version  1
	I1006 14:47:47.374753  780799 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:47:47.375180  780799 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:47:47.375449  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:47:47.411792  780799 out.go:179] * Using the kvm2 driver based on existing profile
	I1006 14:47:47.413037  780799 start.go:304] selected driver: kvm2
	I1006 14:47:47.413055  780799 start.go:924] validating driver "kvm2" against &{Name:kubernetes-upgrade-317912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-317912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:47:47.413163  780799 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:47:47.413893  780799 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:47:47.413986  780799 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 14:47:47.428777  780799 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 14:47:47.428825  780799 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 14:47:47.444577  780799 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 14:47:47.445161  780799 cni.go:84] Creating CNI manager for ""
	I1006 14:47:47.445240  780799 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:47:47.445295  780799 start.go:348] cluster config:
	{Name:kubernetes-upgrade-317912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-317912 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:47:47.445446  780799 iso.go:125] acquiring lock: {Name:mk8de6812bb58933af0bc6eb1d955bf118a3bcec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:47:47.447625  780799 out.go:179] * Starting "kubernetes-upgrade-317912" primary control-plane node in "kubernetes-upgrade-317912" cluster
	I1006 14:47:47.448844  780799 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:47:47.448897  780799 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:47:47.448909  780799 cache.go:58] Caching tarball of preloaded images
	I1006 14:47:47.449005  780799 preload.go:233] Found /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:47:47.449030  780799 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:47:47.449149  780799 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kubernetes-upgrade-317912/config.json ...
	I1006 14:47:47.449364  780799 start.go:360] acquireMachinesLock for kubernetes-upgrade-317912: {Name:mkc5be1cfc8fcefa1839aef4c67a376cc5095e30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1006 14:47:47.449413  780799 start.go:364] duration metric: took 28.032µs to acquireMachinesLock for "kubernetes-upgrade-317912"
	I1006 14:47:47.449428  780799 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:47:47.449433  780799 fix.go:54] fixHost starting: 
	I1006 14:47:47.449711  780799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:47:47.449746  780799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:47:47.463542  780799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37861
	I1006 14:47:47.464207  780799 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:47:47.464841  780799 main.go:141] libmachine: Using API Version  1
	I1006 14:47:47.464870  780799 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:47:47.465358  780799 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:47:47.465613  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:47:47.465815  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetState
	I1006 14:47:47.468111  780799 fix.go:112] recreateIfNeeded on kubernetes-upgrade-317912: state=Stopped err=<nil>
	I1006 14:47:47.468139  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	W1006 14:47:47.468340  780799 fix.go:138] unexpected machine state, will restart: <nil>
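	(fix.go saw the existing domain in state=Stopped and chose a restart. The same check can be reproduced from the host with virsh — a sketch assuming the libvirt URI and domain name shown in this log:
	
	  virsh --connect qemu:///system domstate kubernetes-upgrade-317912   # prints "shut off" for a stopped domain
	  virsh --connect qemu:///system start kubernetes-upgrade-317912     # what minikube's .Start call amounts to
	)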
	I1006 14:47:46.551047  780645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:47:46.551085  780645 machine.go:96] duration metric: took 6.333587657s to provisionDockerMachine
	I1006 14:47:46.551103  780645 start.go:293] postStartSetup for "pause-670840" (driver="kvm2")
	I1006 14:47:46.551119  780645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:47:46.551144  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.551574  780645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:47:46.551630  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.555376  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.555943  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.555973  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.556269  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.556548  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.556771  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.557011  780645 sshutil.go:53] new ssh client: &{IP:192.168.72.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/pause-670840/id_rsa Username:docker}
	I1006 14:47:46.648395  780645 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:47:46.654450  780645 info.go:137] Remote host: Buildroot 2025.02
	I1006 14:47:46.654480  780645 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/addons for local assets ...
	I1006 14:47:46.654558  780645 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/files for local assets ...
	I1006 14:47:46.654672  780645 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem -> 7438512.pem in /etc/ssl/certs
	I1006 14:47:46.654806  780645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:47:46.668862  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:47:46.702103  780645 start.go:296] duration metric: took 150.97698ms for postStartSetup
	I1006 14:47:46.702163  780645 fix.go:56] duration metric: took 6.507190638s for fixHost
	I1006 14:47:46.702191  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.705511  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.705982  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.706038  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.706329  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.706561  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.706785  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.706994  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.707213  780645 main.go:141] libmachine: Using SSH client type: native
	I1006 14:47:46.707476  780645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.41 22 <nil> <nil>}
	I1006 14:47:46.707489  780645 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1006 14:47:46.824788  780645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759762066.822490500
	
	I1006 14:47:46.824817  780645 fix.go:216] guest clock: 1759762066.822490500
	I1006 14:47:46.824828  780645 fix.go:229] Guest: 2025-10-06 14:47:46.8224905 +0000 UTC Remote: 2025-10-06 14:47:46.702169037 +0000 UTC m=+6.684926291 (delta=120.321463ms)
	I1006 14:47:46.824855  780645 fix.go:200] guest clock delta is within tolerance: 120.321463ms
	I1006 14:47:46.824861  780645 start.go:83] releasing machines lock for "pause-670840", held for 6.629929566s
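	(The clock check above runs "date +%s.%N" in the guest and compares it against the host clock. A standalone version of that comparison — a sketch; the address, user, and key path come from this log's run, and the 2s threshold is an assumption rather than minikube's exact tolerance:
	
	  host=$(date +%s.%N)
	  guest=$(ssh -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/pause-670840/id_rsa docker@192.168.72.41 'date +%s.%N')
	  # print the absolute skew and fail if it exceeds the assumed 2s tolerance
	  awk -v h="$host" -v g="$guest" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta=%.6fs\n", d; exit (d > 2) }'
	)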
	I1006 14:47:46.824885  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.825267  780645 main.go:141] libmachine: (pause-670840) Calling .GetIP
	I1006 14:47:46.828900  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.829416  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.829445  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.829693  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.830413  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.830662  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.830796  780645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:47:46.830856  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.830919  780645 ssh_runner.go:195] Run: cat /version.json
	I1006 14:47:46.830938  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.834756  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.834891  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.835244  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.835280  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.835321  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.835337  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.835553  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.835728  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.835818  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.835900  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.835986  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.836058  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.836290  780645 sshutil.go:53] new ssh client: &{IP:192.168.72.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/pause-670840/id_rsa Username:docker}
	I1006 14:47:46.836304  780645 sshutil.go:53] new ssh client: &{IP:192.168.72.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/pause-670840/id_rsa Username:docker}
	I1006 14:47:46.922445  780645 ssh_runner.go:195] Run: systemctl --version
	I1006 14:47:46.957004  780645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:47:47.116951  780645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:47:47.125508  780645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:47:47.125631  780645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:47:47.138220  780645 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
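	(The find/mv pipeline above sidelines any bridge or podman CNI configs by appending .mk_disabled — here there were none. Should you ever need to undo it by hand, the inverse is a sketch like:
	
	  sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
	    -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;
	)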
	I1006 14:47:47.138256  780645 start.go:495] detecting cgroup driver to use...
	I1006 14:47:47.138351  780645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:47:47.162172  780645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:47:47.182890  780645 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:47:47.182957  780645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:47:47.208827  780645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:47:47.229853  780645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:47:47.446718  780645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:47:47.656628  780645 docker.go:234] disabling docker service ...
	I1006 14:47:47.656734  780645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:47:47.690034  780645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:47:47.712894  780645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:47:47.943787  780645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:47:48.125561  780645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:47:48.147227  780645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:47:48.176124  780645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:47:48.176194  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.192411  780645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 14:47:48.192508  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.208512  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.223634  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.239630  780645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:47:48.256462  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.281556  780645 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.352417  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.387837  780645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:47:48.418584  780645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:47:48.457937  780645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:47:48.804638  780645 ssh_runner.go:195] Run: sudo systemctl restart crio
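	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying settings equivalent to the fragment below — a sketch: the values come straight from the commands shown, while the [crio.image]/[crio.runtime] section placement is assumed from stock CRI-O layout:
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	)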
	I1006 14:47:49.513216  780645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:47:49.513314  780645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:47:49.519615  780645 start.go:563] Will wait 60s for crictl version
	I1006 14:47:49.519711  780645 ssh_runner.go:195] Run: which crictl
	I1006 14:47:49.524999  780645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 14:47:49.566718  780645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
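	(The two waits above — 60s for the socket, then for a crictl response — can be reproduced by hand; a minimal sketch using the paths from this log:
	
	  # block up to 60s for CRI-O's socket to reappear after the restart
	  timeout 60 sh -c 'until [ -S /var/run/crio/crio.sock ]; do sleep 1; done'
	  sudo /usr/bin/crictl version
	)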
	I1006 14:47:49.566834  780645 ssh_runner.go:195] Run: crio --version
	I1006 14:47:49.601887  780645 ssh_runner.go:195] Run: crio --version
	I1006 14:47:49.645889  780645 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1006 14:47:49.647376  780645 main.go:141] libmachine: (pause-670840) Calling .GetIP
	I1006 14:47:49.650826  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:49.651277  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:49.651304  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:49.651721  780645 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1006 14:47:49.657780  780645 kubeadm.go:883] updating cluster {Name:pause-670840 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-670840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.41 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:47:49.657948  780645 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:47:49.658041  780645 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:47:49.716165  780645 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:47:49.716200  780645 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:47:49.716266  780645 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:47:49.767788  780645 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:47:49.767813  780645 cache_images.go:85] Images are preloaded, skipping loading
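	(The preload check above simply asks CRI-O for its image list — once before and once after the would-be extraction. To eyeball the same thing inside the guest:
	
	  sudo crictl images --output json   # the machine-readable form minikube parses
	  sudo crictl images                 # quick human-readable table
	)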
	I1006 14:47:49.767821  780645 kubeadm.go:934] updating node { 192.168.72.41 8443 v1.34.1 crio true true} ...
	I1006 14:47:49.767996  780645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-670840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-670840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:47:49.768090  780645 ssh_runner.go:195] Run: crio config
	I1006 14:47:49.824306  780645 cni.go:84] Creating CNI manager for ""
	I1006 14:47:49.824344  780645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:47:49.824384  780645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:47:49.824424  780645 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.41 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-670840 NodeName:pause-670840 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:47:49.824678  780645 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-670840"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:47:49.824797  780645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:47:49.839381  780645 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:47:49.839470  780645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:47:49.855692  780645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1006 14:47:49.880958  780645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:47:49.907128  780645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1006 14:47:49.933706  780645 ssh_runner.go:195] Run: grep 192.168.72.41	control-plane.minikube.internal$ /etc/hosts
	I1006 14:47:49.940022  780645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
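	(With the kubelet unit files and kubeadm.yaml.new staged, the generated config can be sanity-checked against the target version's own kubeadm before use — a sketch; "kubeadm config validate" exists on recent kubeadm releases, and the binary and file paths come from this log:
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)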
	I1006 14:47:47.472653  780799 out.go:252] * Restarting existing kvm2 VM for "kubernetes-upgrade-317912" ...
	I1006 14:47:47.472708  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .Start
	I1006 14:47:47.472919  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) starting domain...
	I1006 14:47:47.473012  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) ensuring networks are active...
	I1006 14:47:47.473881  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Ensuring network default is active
	I1006 14:47:47.474327  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Ensuring network mk-kubernetes-upgrade-317912 is active
	I1006 14:47:47.474786  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) getting domain XML...
	I1006 14:47:47.475922  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | starting domain XML:
	I1006 14:47:47.475947  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | <domain type='kvm'>
	I1006 14:47:47.475959  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <name>kubernetes-upgrade-317912</name>
	I1006 14:47:47.475967  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <uuid>de1160c4-cb0a-4372-9c6d-3a178b57a524</uuid>
	I1006 14:47:47.475976  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <memory unit='KiB'>3145728</memory>
	I1006 14:47:47.476013  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1006 14:47:47.476023  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <vcpu placement='static'>2</vcpu>
	I1006 14:47:47.476030  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <os>
	I1006 14:47:47.476041  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1006 14:47:47.476051  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <boot dev='cdrom'/>
	I1006 14:47:47.476061  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <boot dev='hd'/>
	I1006 14:47:47.476071  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <bootmenu enable='no'/>
	I1006 14:47:47.476082  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   </os>
	I1006 14:47:47.476093  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <features>
	I1006 14:47:47.476170  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <acpi/>
	I1006 14:47:47.476205  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <apic/>
	I1006 14:47:47.476225  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <pae/>
	I1006 14:47:47.476237  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   </features>
	I1006 14:47:47.476254  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1006 14:47:47.476265  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <clock offset='utc'/>
	I1006 14:47:47.476288  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <on_poweroff>destroy</on_poweroff>
	I1006 14:47:47.476300  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <on_reboot>restart</on_reboot>
	I1006 14:47:47.476323  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <on_crash>destroy</on_crash>
	I1006 14:47:47.476341  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <devices>
	I1006 14:47:47.476381  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1006 14:47:47.476403  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <disk type='file' device='cdrom'>
	I1006 14:47:47.476419  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <driver name='qemu' type='raw'/>
	I1006 14:47:47.476445  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/boot2docker.iso'/>
	I1006 14:47:47.476462  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <target dev='hdc' bus='scsi'/>
	I1006 14:47:47.476472  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <readonly/>
	I1006 14:47:47.476485  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1006 14:47:47.476496  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </disk>
	I1006 14:47:47.476501  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <disk type='file' device='disk'>
	I1006 14:47:47.476515  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1006 14:47:47.476530  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/kubernetes-upgrade-317912.rawdisk'/>
	I1006 14:47:47.476563  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <target dev='hda' bus='virtio'/>
	I1006 14:47:47.476598  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1006 14:47:47.476615  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </disk>
	I1006 14:47:47.476626  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1006 14:47:47.476642  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1006 14:47:47.476656  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </controller>
	I1006 14:47:47.476667  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1006 14:47:47.476677  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1006 14:47:47.476688  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1006 14:47:47.476700  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </controller>
	I1006 14:47:47.476716  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <interface type='network'>
	I1006 14:47:47.476831  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <mac address='52:54:00:db:d0:2e'/>
	I1006 14:47:47.476848  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <source network='mk-kubernetes-upgrade-317912'/>
	I1006 14:47:47.476858  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <model type='virtio'/>
	I1006 14:47:47.476869  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1006 14:47:47.476881  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </interface>
	I1006 14:47:47.476888  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <interface type='network'>
	I1006 14:47:47.476898  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <mac address='52:54:00:b4:89:83'/>
	I1006 14:47:47.476906  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <source network='default'/>
	I1006 14:47:47.476916  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <model type='virtio'/>
	I1006 14:47:47.476967  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1006 14:47:47.476981  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </interface>
	I1006 14:47:47.476988  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <serial type='pty'>
	I1006 14:47:47.477021  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <target type='isa-serial' port='0'>
	I1006 14:47:47.477033  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |         <model name='isa-serial'/>
	I1006 14:47:47.477050  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       </target>
	I1006 14:47:47.477064  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </serial>
	I1006 14:47:47.477076  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <console type='pty'>
	I1006 14:47:47.477095  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <target type='serial' port='0'/>
	I1006 14:47:47.477119  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </console>
	I1006 14:47:47.477138  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <input type='mouse' bus='ps2'/>
	I1006 14:47:47.477151  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <input type='keyboard' bus='ps2'/>
	I1006 14:47:47.477184  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <audio id='1' type='none'/>
	I1006 14:47:47.477197  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <memballoon model='virtio'>
	I1006 14:47:47.477206  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1006 14:47:47.477214  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </memballoon>
	I1006 14:47:47.477226  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <rng model='virtio'>
	I1006 14:47:47.477239  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <backend model='random'>/dev/random</backend>
	I1006 14:47:47.477252  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1006 14:47:47.477265  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </rng>
	I1006 14:47:47.477276  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   </devices>
	I1006 14:47:47.477287  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | </domain>
	I1006 14:47:47.477295  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | 
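
The XML above is the complete libvirt domain definition the kvm2 driver logs before booting the VM: the machine's rawdisk on a virtio bus, the boot media on SCSI, two virtio NICs (the private mk-kubernetes-upgrade-317912 network plus the default network), a pty serial console, and a virtio RNG backed by /dev/random. A minimal sketch of defining and starting such a domain, assuming the libvirt.org/go/libvirt bindings rather than minikube's actual driver code:

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt" // assumed binding, not minikube's vendored driver
    )

    func main() {
        // domain.xml is assumed to hold a <domain>...</domain> document
        // like the one dumped above.
        xml, err := os.ReadFile("domain.xml")
        if err != nil {
            log.Fatal(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // define the persistent domain
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil { // boot it: "waiting for domain to start..."
            log.Fatal(err)
        }
        log.Println("domain is now running")
    }
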
	I1006 14:47:47.941523  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) waiting for domain to start...
	I1006 14:47:47.943152  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) domain is now running
	I1006 14:47:47.943177  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) waiting for IP...
	I1006 14:47:47.944321  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:47:47.945238  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) found domain IP: 192.168.39.45
	I1006 14:47:47.945283  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) reserving static IP address...
	I1006 14:47:47.945302  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has current primary IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:47:47.945867  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-317912", mac: "52:54:00:db:d0:2e", ip: "192.168.39.45"} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:19 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:47:47.945931  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) reserved static IP address 192.168.39.45 for domain kubernetes-upgrade-317912
	I1006 14:47:47.945955  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | skip adding static IP to network mk-kubernetes-upgrade-317912 - found existing host DHCP lease matching {name: "kubernetes-upgrade-317912", mac: "52:54:00:db:d0:2e", ip: "192.168.39.45"}
	I1006 14:47:47.945980  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | Getting to WaitForSSH function...
	I1006 14:47:47.945997  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) waiting for SSH...
	I1006 14:47:47.949136  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:47:47.949563  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:19 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:47:47.949625  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:47:47.949887  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | Using SSH client type: external
	I1006 14:47:47.949914  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | Using SSH private key: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa (-rw-------)
	I1006 14:47:47.950053  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1006 14:47:47.950082  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | About to run SSH command:
	I1006 14:47:47.950101  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | exit 0
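
The WaitForSSH step above shells out to the external ssh client with the logged options and runs "exit 0" until the command succeeds, which is how the driver knows the guest is reachable. A sketch of the same polling loop; the host, key path, abbreviated option list, and retry budget are illustrative assumptions, not minikube's exact values:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // waitForSSH polls until `ssh ... exit 0` succeeds, mirroring the
    // WaitForSSH loop in the log above.
    func waitForSSH(host, keyPath string) error {
        var err error
        for i := 0; i < 30; i++ { // retry budget is an assumption
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                "docker@"+host, "exit 0")
            if err = cmd.Run(); err == nil {
                return nil // SSH is up
            }
            time.Sleep(2 * time.Second)
        }
        return err
    }

    func main() {
        if err := waitForSSH("192.168.39.45", "/path/to/id_rsa"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is ready")
    }
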
	I1006 14:47:50.111937  780645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:47:50.132196  780645 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840 for IP: 192.168.72.41
	I1006 14:47:50.132221  780645 certs.go:195] generating shared ca certs ...
	I1006 14:47:50.132237  780645 certs.go:227] acquiring lock for ca certs: {Name:mkac26b60e1fd10143a5d4dc5ca0de64e9dd4f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:47:50.132434  780645 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key
	I1006 14:47:50.132497  780645 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key
	I1006 14:47:50.132508  780645 certs.go:257] generating profile certs ...
	I1006 14:47:50.132640  780645 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/client.key
	I1006 14:47:50.132730  780645 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/apiserver.key.24981bcd
	I1006 14:47:50.132788  780645 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/proxy-client.key
	I1006 14:47:50.132958  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem (1338 bytes)
	W1006 14:47:50.132989  780645 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851_empty.pem, impossibly tiny 0 bytes
	I1006 14:47:50.132997  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem (1679 bytes)
	I1006 14:47:50.133023  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem (1078 bytes)
	I1006 14:47:50.133052  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:47:50.133084  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem (1679 bytes)
	I1006 14:47:50.133135  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:47:50.133955  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:47:50.169976  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 14:47:50.205090  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:47:50.241959  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:47:50.277867  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 14:47:50.310839  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:47:50.352120  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:47:50.388700  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:47:50.501773  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /usr/share/ca-certificates/7438512.pem (1708 bytes)
	I1006 14:47:50.564110  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:47:50.661959  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem --> /usr/share/ca-certificates/743851.pem (1338 bytes)
	I1006 14:47:50.754090  780645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:47:50.809344  780645 ssh_runner.go:195] Run: openssl version
	I1006 14:47:50.821382  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:47:50.846163  780645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:47:50.856996  780645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:50 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:47:50.857078  780645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:47:50.874879  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:47:50.899354  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/743851.pem && ln -fs /usr/share/ca-certificates/743851.pem /etc/ssl/certs/743851.pem"
	I1006 14:47:50.920272  780645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/743851.pem
	I1006 14:47:50.930864  780645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 13:59 /usr/share/ca-certificates/743851.pem
	I1006 14:47:50.930957  780645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/743851.pem
	I1006 14:47:50.945398  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/743851.pem /etc/ssl/certs/51391683.0"
	I1006 14:47:50.967680  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7438512.pem && ln -fs /usr/share/ca-certificates/7438512.pem /etc/ssl/certs/7438512.pem"
	I1006 14:47:51.000269  780645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7438512.pem
	I1006 14:47:51.009946  780645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 13:59 /usr/share/ca-certificates/7438512.pem
	I1006 14:47:51.010040  780645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7438512.pem
	I1006 14:47:51.021415  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7438512.pem /etc/ssl/certs/3ec20f2e.0"
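
The ls/openssl/ln sequence above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0), the lookup scheme OpenSSL uses to find trusted roots. A sketch of the same hash-and-symlink step, assuming openssl is on PATH; installCA is a hypothetical helper and the paths are illustrative:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links certPath into certsDir under its OpenSSL subject
    // hash, mimicking the "openssl x509 -hash" + "ln -fs" pair in the log.
    func installCA(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // -f semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            log.Fatal(err)
        }
    }
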
	I1006 14:47:51.036430  780645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:47:51.048362  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:47:51.059802  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:47:51.074755  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:47:51.097221  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:47:51.111515  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:47:51.126076  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
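
Each "-checkend 86400" invocation above asks whether a control-plane certificate expires within the next 24 hours, so stale certs can be regenerated before the cluster is started. An equivalent check in pure Go (the cert path is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, the same question "openssl x509 -checkend" answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
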
	I1006 14:47:51.136261  780645 kubeadm.go:400] StartCluster: {Name:pause-670840 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-670840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.41 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:47:51.136437  780645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:47:51.136534  780645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:47:51.213622  780645 cri.go:89] found id: "24692b5695875a0e07b8044479544bd940fa12fb399ecbcfcb42c79741c24af1"
	I1006 14:47:51.213657  780645 cri.go:89] found id: "6f1f73cc4a476e62f0bd839c947f2ab1c2014a3e06e2060ee96869e039e1c125"
	I1006 14:47:51.213664  780645 cri.go:89] found id: "8aeec49d42a25a681a66edb73e04dc51bdd60cc474a3560ed674b9a0c9ba6dc7"
	I1006 14:47:51.213670  780645 cri.go:89] found id: "0dd1035d820529039269bde549155f111bc74b5b3b5019542983cf0d262d42f9"
	I1006 14:47:51.213675  780645 cri.go:89] found id: "f0ca3c7483e87d53990c734308aedb63f5b38fb6a25bfd03e22d7dec5a050cfb"
	I1006 14:47:51.213680  780645 cri.go:89] found id: "3b13529d4e4132c35fa76f6df0347178f1c6dc37e51ffc1fd1f6cd6c4d317d1e"
	I1006 14:47:51.213685  780645 cri.go:89] found id: "97e6c2494ec684f14cf5a4ab45bd825b7029c38692f2aabffd6254a6b52403a8"
	I1006 14:47:51.213689  780645 cri.go:89] found id: ""
	I1006 14:47:51.213757  780645 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
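
The container IDs found just before the stdout block closes above come from "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system". A sketch of issuing the same query and collecting the IDs, assuming crictl is installed and already pointed at the CRI-O socket:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // kubeSystemContainers returns all container IDs (in any state) whose
    // pod lives in kube-system, matching the crictl invocation in the log.
    func kubeSystemContainers() ([]string, error) {
        out, err := exec.Command("crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line
    }

    func main() {
        ids, err := kubeSystemContainers()
        if err != nil {
            log.Fatal(err)
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }
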
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-670840 -n pause-670840
helpers_test.go:269: (dbg) Run:  kubectl --context pause-670840 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-670840 -n pause-670840
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-670840 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-670840 logs -n 25: (1.71320394s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-702246 sudo containerd config dump                                                                                                                                                                                                        │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-702246 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-702246 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-702246 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │                     │
	│ ssh     │ -p cilium-702246 sudo crio config                                                                                                                                                                                                                   │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │                     │
	│ delete  │ -p cilium-702246                                                                                                                                                                                                                                    │ cilium-702246             │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:45 UTC │
	│ start   │ -p cert-expiration-435206 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                    │ cert-expiration-435206    │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p NoKubernetes-419392 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                     │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p running-upgrade-455354 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                  │ running-upgrade-455354    │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ ssh     │ force-systemd-flag-640885 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                                │ force-systemd-flag-640885 │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:45 UTC │
	│ delete  │ -p force-systemd-flag-640885                                                                                                                                                                                                                        │ force-systemd-flag-640885 │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:45 UTC │
	│ start   │ -p cert-options-809645 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:45 UTC │ 06 Oct 25 14:46 UTC │
	│ delete  │ -p NoKubernetes-419392                                                                                                                                                                                                                              │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p NoKubernetes-419392 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                     │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-455354 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                         │ running-upgrade-455354    │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │                     │
	│ delete  │ -p running-upgrade-455354                                                                                                                                                                                                                           │ running-upgrade-455354    │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p pause-670840 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                 │ pause-670840              │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:47 UTC │
	│ ssh     │ -p NoKubernetes-419392 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-419392       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │                     │
	│ ssh     │ cert-options-809645 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ ssh     │ -p cert-options-809645 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ delete  │ -p cert-options-809645                                                                                                                                                                                                                              │ cert-options-809645       │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:46 UTC │
	│ start   │ -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kubernetes-upgrade-317912 │ jenkins │ v1.37.0 │ 06 Oct 25 14:46 UTC │ 06 Oct 25 14:47 UTC │
	│ start   │ -p pause-670840 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                          │ pause-670840              │ jenkins │ v1.37.0 │ 06 Oct 25 14:47 UTC │ 06 Oct 25 14:48 UTC │
	│ stop    │ -p kubernetes-upgrade-317912                                                                                                                                                                                                                        │ kubernetes-upgrade-317912 │ jenkins │ v1.37.0 │ 06 Oct 25 14:47 UTC │ 06 Oct 25 14:47 UTC │
	│ start   │ -p kubernetes-upgrade-317912 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kubernetes-upgrade-317912 │ jenkins │ v1.37.0 │ 06 Oct 25 14:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:47:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
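
The header above documents the klog line format used throughout these logs: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A small sketch of splitting such a line into its fields with a regular expression (illustrative only; not part of the test harness):

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine matches the format documented in the log header:
    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
        line := "I1006 14:47:47.325966  780799 out.go:360] Setting OutFile to fd 1 ..."
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("level=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }
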
	I1006 14:47:47.325966  780799 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:47:47.326078  780799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:47:47.326086  780799 out.go:374] Setting ErrFile to fd 2...
	I1006 14:47:47.326090  780799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:47:47.326322  780799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:47:47.326761  780799 out.go:368] Setting JSON to false
	I1006 14:47:47.327720  780799 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16218,"bootTime":1759745849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:47:47.327842  780799 start.go:140] virtualization: kvm guest
	I1006 14:47:47.329989  780799 out.go:179] * [kubernetes-upgrade-317912] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:47:47.331419  780799 notify.go:220] Checking for updates...
	I1006 14:47:47.331423  780799 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:47:47.332911  780799 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:47:47.334439  780799 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 14:47:47.335671  780799 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 14:47:47.336940  780799 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:47:47.338577  780799 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:47:47.340444  780799 config.go:182] Loaded profile config "kubernetes-upgrade-317912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1006 14:47:47.340887  780799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:47:47.340947  780799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:47:47.355245  780799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37429
	I1006 14:47:47.356084  780799 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:47:47.356743  780799 main.go:141] libmachine: Using API Version  1
	I1006 14:47:47.356771  780799 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:47:47.357223  780799 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:47:47.357461  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:47:47.357793  780799 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:47:47.358248  780799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:47:47.358307  780799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:47:47.373396  780799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I1006 14:47:47.374059  780799 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:47:47.374725  780799 main.go:141] libmachine: Using API Version  1
	I1006 14:47:47.374753  780799 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:47:47.375180  780799 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:47:47.375449  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:47:47.411792  780799 out.go:179] * Using the kvm2 driver based on existing profile
	I1006 14:47:47.413037  780799 start.go:304] selected driver: kvm2
	I1006 14:47:47.413055  780799 start.go:924] validating driver "kvm2" against &{Name:kubernetes-upgrade-317912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-317912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:47:47.413163  780799 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:47:47.413893  780799 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:47:47.413986  780799 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 14:47:47.428777  780799 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 14:47:47.428825  780799 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 14:47:47.444577  780799 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 14:47:47.445161  780799 cni.go:84] Creating CNI manager for ""
	I1006 14:47:47.445240  780799 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:47:47.445295  780799 start.go:348] cluster config:
	{Name:kubernetes-upgrade-317912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-317912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:47:47.445446  780799 iso.go:125] acquiring lock: {Name:mk8de6812bb58933af0bc6eb1d955bf118a3bcec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:47:47.447625  780799 out.go:179] * Starting "kubernetes-upgrade-317912" primary control-plane node in "kubernetes-upgrade-317912" cluster
	I1006 14:47:47.448844  780799 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:47:47.448897  780799 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:47:47.448909  780799 cache.go:58] Caching tarball of preloaded images
	I1006 14:47:47.449005  780799 preload.go:233] Found /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:47:47.449030  780799 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:47:47.449149  780799 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kubernetes-upgrade-317912/config.json ...
	I1006 14:47:47.449364  780799 start.go:360] acquireMachinesLock for kubernetes-upgrade-317912: {Name:mkc5be1cfc8fcefa1839aef4c67a376cc5095e30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1006 14:47:47.449413  780799 start.go:364] duration metric: took 28.032µs to acquireMachinesLock for "kubernetes-upgrade-317912"
	I1006 14:47:47.449428  780799 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:47:47.449433  780799 fix.go:54] fixHost starting: 
	I1006 14:47:47.449711  780799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:47:47.449746  780799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:47:47.463542  780799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37861
	I1006 14:47:47.464207  780799 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:47:47.464841  780799 main.go:141] libmachine: Using API Version  1
	I1006 14:47:47.464870  780799 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:47:47.465358  780799 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:47:47.465613  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	I1006 14:47:47.465815  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .GetState
	I1006 14:47:47.468111  780799 fix.go:112] recreateIfNeeded on kubernetes-upgrade-317912: state=Stopped err=<nil>
	I1006 14:47:47.468139  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .DriverName
	W1006 14:47:47.468340  780799 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:47:46.551047  780645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:47:46.551085  780645 machine.go:96] duration metric: took 6.333587657s to provisionDockerMachine
	I1006 14:47:46.551103  780645 start.go:293] postStartSetup for "pause-670840" (driver="kvm2")
	I1006 14:47:46.551119  780645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:47:46.551144  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.551574  780645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:47:46.551630  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.555376  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.555943  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.555973  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.556269  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.556548  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.556771  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.557011  780645 sshutil.go:53] new ssh client: &{IP:192.168.72.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/pause-670840/id_rsa Username:docker}
	I1006 14:47:46.648395  780645 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:47:46.654450  780645 info.go:137] Remote host: Buildroot 2025.02
	I1006 14:47:46.654480  780645 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/addons for local assets ...
	I1006 14:47:46.654558  780645 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-739942/.minikube/files for local assets ...
	I1006 14:47:46.654672  780645 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem -> 7438512.pem in /etc/ssl/certs
	I1006 14:47:46.654806  780645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:47:46.668862  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:47:46.702103  780645 start.go:296] duration metric: took 150.97698ms for postStartSetup
	I1006 14:47:46.702163  780645 fix.go:56] duration metric: took 6.507190638s for fixHost
	I1006 14:47:46.702191  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.705511  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.705982  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.706038  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.706329  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.706561  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.706785  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.706994  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.707213  780645 main.go:141] libmachine: Using SSH client type: native
	I1006 14:47:46.707476  780645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.41 22 <nil> <nil>}
	I1006 14:47:46.707489  780645 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1006 14:47:46.824788  780645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759762066.822490500
	
	I1006 14:47:46.824817  780645 fix.go:216] guest clock: 1759762066.822490500
	I1006 14:47:46.824828  780645 fix.go:229] Guest: 2025-10-06 14:47:46.8224905 +0000 UTC Remote: 2025-10-06 14:47:46.702169037 +0000 UTC m=+6.684926291 (delta=120.321463ms)
	I1006 14:47:46.824855  780645 fix.go:200] guest clock delta is within tolerance: 120.321463ms
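
The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the roughly 120ms delta as within tolerance. A sketch of the same measurement over SSH; the host, key path, and the 2s tolerance here are assumptions, not minikube's actual values:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta runs `date +%s.%N` on the guest over SSH and returns
    // how far the guest clock is from the local one, as in the log above.
    func guestClockDelta(host, keyPath string) (time.Duration, error) {
        out, err := exec.Command("ssh", "-i", keyPath, "docker@"+host, "date +%s.%N").Output()
        if err != nil {
            return 0, err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*1e9))
        return time.Since(guest), nil
    }

    func main() {
        delta, err := guestClockDelta("192.168.72.41", "/path/to/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("guest clock delta:", delta)
        if delta < -2*time.Second || delta > 2*time.Second { // tolerance is an assumption
            fmt.Println("guest clock outside tolerance")
        }
    }
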
	I1006 14:47:46.824861  780645 start.go:83] releasing machines lock for "pause-670840", held for 6.629929566s
	I1006 14:47:46.824885  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.825267  780645 main.go:141] libmachine: (pause-670840) Calling .GetIP
	I1006 14:47:46.828900  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.829416  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.829445  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.829693  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.830413  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.830662  780645 main.go:141] libmachine: (pause-670840) Calling .DriverName
	I1006 14:47:46.830796  780645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:47:46.830856  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.830919  780645 ssh_runner.go:195] Run: cat /version.json
	I1006 14:47:46.830938  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHHostname
	I1006 14:47:46.834756  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.834891  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.835244  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.835280  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:46.835321  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.835337  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:46.835553  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.835728  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHPort
	I1006 14:47:46.835818  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.835900  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHKeyPath
	I1006 14:47:46.835986  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.836058  780645 main.go:141] libmachine: (pause-670840) Calling .GetSSHUsername
	I1006 14:47:46.836290  780645 sshutil.go:53] new ssh client: &{IP:192.168.72.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/pause-670840/id_rsa Username:docker}
	I1006 14:47:46.836304  780645 sshutil.go:53] new ssh client: &{IP:192.168.72.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/pause-670840/id_rsa Username:docker}
	I1006 14:47:46.922445  780645 ssh_runner.go:195] Run: systemctl --version
	I1006 14:47:46.957004  780645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:47:47.116951  780645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:47:47.125508  780645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:47:47.125631  780645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:47:47.138220  780645 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:47:47.138256  780645 start.go:495] detecting cgroup driver to use...
	I1006 14:47:47.138351  780645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:47:47.162172  780645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:47:47.182890  780645 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:47:47.182957  780645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:47:47.208827  780645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:47:47.229853  780645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:47:47.446718  780645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:47:47.656628  780645 docker.go:234] disabling docker service ...
	I1006 14:47:47.656734  780645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:47:47.690034  780645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:47:47.712894  780645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:47:47.943787  780645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:47:48.125561  780645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:47:48.147227  780645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:47:48.176124  780645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:47:48.176194  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.192411  780645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 14:47:48.192508  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.208512  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.223634  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.239630  780645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:47:48.256462  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.281556  780645 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:47:48.352417  780645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
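
The sed invocations above pin CRI-O's pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to cgroupfs, and seed default_sysctls, all by rewriting /etc/crio/crio.conf.d/02-crio.conf in place. A sketch of the same key rewrite using a Go regexp instead of sed; setCrioOption is a hypothetical helper, not minikube's code:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    // setCrioOption rewrites `key = ...` lines in a CRI-O drop-in config,
    // mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` calls above.
    func setCrioOption(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
            log.Fatal(err)
        }
        if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
            log.Fatal(err)
        }
    }
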
	I1006 14:47:48.387837  780645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:47:48.418584  780645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:47:48.457937  780645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:47:48.804638  780645 ssh_runner.go:195] Run: sudo systemctl restart crio
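
The run above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed edits (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A minimal Go sketch of the two central rewrites, assuming the same file layout; it operates on a local copy rather than the live config:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // Illustrative: mirrors the pause_image and cgroup_manager sed
    // edits from the log. It edits a local copy named 02-crio.conf
    // rather than the live /etc/crio/crio.conf.d file.
    func main() {
        const path = "02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
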
	I1006 14:47:49.513216  780645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:47:49.513314  780645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:47:49.519615  780645 start.go:563] Will wait 60s for crictl version
	I1006 14:47:49.519711  780645 ssh_runner.go:195] Run: which crictl
	I1006 14:47:49.524999  780645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 14:47:49.566718  780645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1006 14:47:49.566834  780645 ssh_runner.go:195] Run: crio --version
	I1006 14:47:49.601887  780645 ssh_runner.go:195] Run: crio --version
	I1006 14:47:49.645889  780645 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1006 14:47:49.647376  780645 main.go:141] libmachine: (pause-670840) Calling .GetIP
	I1006 14:47:49.650826  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:49.651277  780645 main.go:141] libmachine: (pause-670840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:88:ba", ip: ""} in network mk-pause-670840: {Iface:virbr4 ExpiryTime:2025-10-06 15:46:57 +0000 UTC Type:0 Mac:52:54:00:db:88:ba Iaid: IPaddr:192.168.72.41 Prefix:24 Hostname:pause-670840 Clientid:01:52:54:00:db:88:ba}
	I1006 14:47:49.651304  780645 main.go:141] libmachine: (pause-670840) DBG | domain pause-670840 has defined IP address 192.168.72.41 and MAC address 52:54:00:db:88:ba in network mk-pause-670840
	I1006 14:47:49.651721  780645 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1006 14:47:49.657780  780645 kubeadm.go:883] updating cluster {Name:pause-670840 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-670840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.41 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:47:49.657948  780645 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:47:49.658041  780645 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:47:49.716165  780645 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:47:49.716200  780645 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:47:49.716266  780645 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:47:49.767788  780645 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:47:49.767813  780645 cache_images.go:85] Images are preloaded, skipping loading
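
The preload check above runs crictl images --output json twice and concludes every required image is already present. A small Go sketch of parsing that JSON, using crictl's documented output fields (images/repoTags); running it for real needs crictl on the PATH and root privileges:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // List container images via crictl's JSON output and print their
    // tags, the raw material of the "all images are preloaded" check.
    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var resp struct {
            Images []struct {
                RepoTags []string `json:"repoTags"`
            } `json:"images"`
        }
        if err := json.Unmarshal(out, &resp); err != nil {
            log.Fatal(err)
        }
        for _, img := range resp.Images {
            fmt.Println(img.RepoTags)
        }
    }
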
	I1006 14:47:49.767821  780645 kubeadm.go:934] updating node { 192.168.72.41 8443 v1.34.1 crio true true} ...
	I1006 14:47:49.767996  780645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-670840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-670840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
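
The block above is the rendered kubelet systemd drop-in together with the cluster config it was derived from. A sketch of how such a unit could be rendered with Go's text/template; the template text and field names here are illustrative stand-ins, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // Illustrative template; minikube's real kubeadm/kubelet templates
    // live in the minikube source tree and differ in detail.
    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.34.1",
            "NodeName":          "pause-670840",
            "NodeIP":            "192.168.72.41",
        })
    }
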
	I1006 14:47:49.768090  780645 ssh_runner.go:195] Run: crio config
	I1006 14:47:49.824306  780645 cni.go:84] Creating CNI manager for ""
	I1006 14:47:49.824344  780645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 14:47:49.824384  780645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:47:49.824424  780645 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.41 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-670840 NodeName:pause-670840 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:47:49.824678  780645 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-670840"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:47:49.824797  780645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:47:49.839381  780645 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:47:49.839470  780645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:47:49.855692  780645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1006 14:47:49.880958  780645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:47:49.907128  780645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1006 14:47:49.933706  780645 ssh_runner.go:195] Run: grep 192.168.72.41	control-plane.minikube.internal$ /etc/hosts
	I1006 14:47:49.940022  780645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:47:47.472653  780799 out.go:252] * Restarting existing kvm2 VM for "kubernetes-upgrade-317912" ...
	I1006 14:47:47.472708  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Calling .Start
	I1006 14:47:47.472919  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) starting domain...
	I1006 14:47:47.473012  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) ensuring networks are active...
	I1006 14:47:47.473881  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Ensuring network default is active
	I1006 14:47:47.474327  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) Ensuring network mk-kubernetes-upgrade-317912 is active
	I1006 14:47:47.474786  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) getting domain XML...
	I1006 14:47:47.475922  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | starting domain XML:
	I1006 14:47:47.475947  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | <domain type='kvm'>
	I1006 14:47:47.475959  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <name>kubernetes-upgrade-317912</name>
	I1006 14:47:47.475967  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <uuid>de1160c4-cb0a-4372-9c6d-3a178b57a524</uuid>
	I1006 14:47:47.475976  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <memory unit='KiB'>3145728</memory>
	I1006 14:47:47.476013  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1006 14:47:47.476023  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <vcpu placement='static'>2</vcpu>
	I1006 14:47:47.476030  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <os>
	I1006 14:47:47.476041  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1006 14:47:47.476051  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <boot dev='cdrom'/>
	I1006 14:47:47.476061  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <boot dev='hd'/>
	I1006 14:47:47.476071  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <bootmenu enable='no'/>
	I1006 14:47:47.476082  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   </os>
	I1006 14:47:47.476093  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <features>
	I1006 14:47:47.476170  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <acpi/>
	I1006 14:47:47.476205  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <apic/>
	I1006 14:47:47.476225  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <pae/>
	I1006 14:47:47.476237  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   </features>
	I1006 14:47:47.476254  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1006 14:47:47.476265  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <clock offset='utc'/>
	I1006 14:47:47.476288  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <on_poweroff>destroy</on_poweroff>
	I1006 14:47:47.476300  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <on_reboot>restart</on_reboot>
	I1006 14:47:47.476323  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <on_crash>destroy</on_crash>
	I1006 14:47:47.476341  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   <devices>
	I1006 14:47:47.476381  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1006 14:47:47.476403  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <disk type='file' device='cdrom'>
	I1006 14:47:47.476419  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <driver name='qemu' type='raw'/>
	I1006 14:47:47.476445  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/boot2docker.iso'/>
	I1006 14:47:47.476462  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <target dev='hdc' bus='scsi'/>
	I1006 14:47:47.476472  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <readonly/>
	I1006 14:47:47.476485  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1006 14:47:47.476496  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </disk>
	I1006 14:47:47.476501  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <disk type='file' device='disk'>
	I1006 14:47:47.476515  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1006 14:47:47.476530  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <source file='/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/kubernetes-upgrade-317912.rawdisk'/>
	I1006 14:47:47.476563  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <target dev='hda' bus='virtio'/>
	I1006 14:47:47.476598  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1006 14:47:47.476615  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </disk>
	I1006 14:47:47.476626  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1006 14:47:47.476642  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1006 14:47:47.476656  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </controller>
	I1006 14:47:47.476667  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1006 14:47:47.476677  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1006 14:47:47.476688  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1006 14:47:47.476700  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </controller>
	I1006 14:47:47.476716  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <interface type='network'>
	I1006 14:47:47.476831  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <mac address='52:54:00:db:d0:2e'/>
	I1006 14:47:47.476848  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <source network='mk-kubernetes-upgrade-317912'/>
	I1006 14:47:47.476858  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <model type='virtio'/>
	I1006 14:47:47.476869  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1006 14:47:47.476881  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </interface>
	I1006 14:47:47.476888  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <interface type='network'>
	I1006 14:47:47.476898  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <mac address='52:54:00:b4:89:83'/>
	I1006 14:47:47.476906  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <source network='default'/>
	I1006 14:47:47.476916  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <model type='virtio'/>
	I1006 14:47:47.476967  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1006 14:47:47.476981  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </interface>
	I1006 14:47:47.476988  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <serial type='pty'>
	I1006 14:47:47.477021  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <target type='isa-serial' port='0'>
	I1006 14:47:47.477033  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |         <model name='isa-serial'/>
	I1006 14:47:47.477050  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       </target>
	I1006 14:47:47.477064  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </serial>
	I1006 14:47:47.477076  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <console type='pty'>
	I1006 14:47:47.477095  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <target type='serial' port='0'/>
	I1006 14:47:47.477119  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </console>
	I1006 14:47:47.477138  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <input type='mouse' bus='ps2'/>
	I1006 14:47:47.477151  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <input type='keyboard' bus='ps2'/>
	I1006 14:47:47.477184  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <audio id='1' type='none'/>
	I1006 14:47:47.477197  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <memballoon model='virtio'>
	I1006 14:47:47.477206  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1006 14:47:47.477214  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </memballoon>
	I1006 14:47:47.477226  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     <rng model='virtio'>
	I1006 14:47:47.477239  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <backend model='random'>/dev/random</backend>
	I1006 14:47:47.477252  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1006 14:47:47.477265  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |     </rng>
	I1006 14:47:47.477276  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG |   </devices>
	I1006 14:47:47.477287  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | </domain>
	I1006 14:47:47.477295  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | 
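
The XML dump above is the libvirt domain definition that the kvm2 driver boots. A minimal sketch of the restart step, assuming the libvirt.org/go/libvirt Go bindings (building it requires the libvirt C headers); Create() boots a defined-but-inactive domain, which is what "Restarting existing kvm2 VM" amounts to:

    package main

    import (
        "fmt"
        "log"

        libvirt "libvirt.org/go/libvirt" // assumption: current module path for the Go bindings
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.LookupDomainByName("kubernetes-upgrade-317912")
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        // Create() starts a defined-but-inactive domain.
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("domain is now running")
    }
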
	I1006 14:47:47.941523  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) waiting for domain to start...
	I1006 14:47:47.943152  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) domain is now running
	I1006 14:47:47.943177  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) waiting for IP...
	I1006 14:47:47.944321  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:47:47.945238  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) found domain IP: 192.168.39.45
	I1006 14:47:47.945283  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) reserving static IP address...
	I1006 14:47:47.945302  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has current primary IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:47:47.945867  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-317912", mac: "52:54:00:db:d0:2e", ip: "192.168.39.45"} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:19 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:47:47.945931  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) reserved static IP address 192.168.39.45 for domain kubernetes-upgrade-317912
	I1006 14:47:47.945955  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | skip adding static IP to network mk-kubernetes-upgrade-317912 - found existing host DHCP lease matching {name: "kubernetes-upgrade-317912", mac: "52:54:00:db:d0:2e", ip: "192.168.39.45"}
	I1006 14:47:47.945980  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | Getting to WaitForSSH function...
	I1006 14:47:47.945997  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) waiting for SSH...
	I1006 14:47:47.949136  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:47:47.949563  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:d0:2e", ip: ""} in network mk-kubernetes-upgrade-317912: {Iface:virbr1 ExpiryTime:2025-10-06 15:47:19 +0000 UTC Type:0 Mac:52:54:00:db:d0:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:kubernetes-upgrade-317912 Clientid:01:52:54:00:db:d0:2e}
	I1006 14:47:47.949625  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | domain kubernetes-upgrade-317912 has defined IP address 192.168.39.45 and MAC address 52:54:00:db:d0:2e in network mk-kubernetes-upgrade-317912
	I1006 14:47:47.949887  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | Using SSH client type: external
	I1006 14:47:47.949914  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | Using SSH private key: /home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa (-rw-------)
	I1006 14:47:47.950053  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1006 14:47:47.950082  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | About to run SSH command:
	I1006 14:47:47.950101  780799 main.go:141] libmachine: (kubernetes-upgrade-317912) DBG | exit 0
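
The SSH wait above shells out to the system ssh client with host-key checking disabled and runs exit 0 until it succeeds. An equivalent probe in Go via os/exec, reusing the host, user, and key path from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Liveness probe: run `exit 0` on the guest over SSH and treat a
    // zero exit status as "SSH is up", mirroring the log above.
    func main() {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-i", "/home/jenkins/minikube-integration/21701-739942/.minikube/machines/kubernetes-upgrade-317912/id_rsa",
            "docker@192.168.39.45", "exit 0")
        if err := cmd.Run(); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is reachable")
    }
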
	I1006 14:47:50.111937  780645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:47:50.132196  780645 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840 for IP: 192.168.72.41
	I1006 14:47:50.132221  780645 certs.go:195] generating shared ca certs ...
	I1006 14:47:50.132237  780645 certs.go:227] acquiring lock for ca certs: {Name:mkac26b60e1fd10143a5d4dc5ca0de64e9dd4f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:47:50.132434  780645 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key
	I1006 14:47:50.132497  780645 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key
	I1006 14:47:50.132508  780645 certs.go:257] generating profile certs ...
	I1006 14:47:50.132640  780645 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/client.key
	I1006 14:47:50.132730  780645 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/apiserver.key.24981bcd
	I1006 14:47:50.132788  780645 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/proxy-client.key
	I1006 14:47:50.132958  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem (1338 bytes)
	W1006 14:47:50.132989  780645 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851_empty.pem, impossibly tiny 0 bytes
	I1006 14:47:50.132997  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca-key.pem (1679 bytes)
	I1006 14:47:50.133023  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/ca.pem (1078 bytes)
	I1006 14:47:50.133052  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:47:50.133084  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/certs/key.pem (1679 bytes)
	I1006 14:47:50.133135  780645 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem (1708 bytes)
	I1006 14:47:50.133955  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:47:50.169976  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 14:47:50.205090  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:47:50.241959  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:47:50.277867  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 14:47:50.310839  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:47:50.352120  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:47:50.388700  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/pause-670840/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:47:50.501773  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/ssl/certs/7438512.pem --> /usr/share/ca-certificates/7438512.pem (1708 bytes)
	I1006 14:47:50.564110  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:47:50.661959  780645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-739942/.minikube/certs/743851.pem --> /usr/share/ca-certificates/743851.pem (1338 bytes)
	I1006 14:47:50.754090  780645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:47:50.809344  780645 ssh_runner.go:195] Run: openssl version
	I1006 14:47:50.821382  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:47:50.846163  780645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:47:50.856996  780645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:50 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:47:50.857078  780645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:47:50.874879  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:47:50.899354  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/743851.pem && ln -fs /usr/share/ca-certificates/743851.pem /etc/ssl/certs/743851.pem"
	I1006 14:47:50.920272  780645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/743851.pem
	I1006 14:47:50.930864  780645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 13:59 /usr/share/ca-certificates/743851.pem
	I1006 14:47:50.930957  780645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/743851.pem
	I1006 14:47:50.945398  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/743851.pem /etc/ssl/certs/51391683.0"
	I1006 14:47:50.967680  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7438512.pem && ln -fs /usr/share/ca-certificates/7438512.pem /etc/ssl/certs/7438512.pem"
	I1006 14:47:51.000269  780645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7438512.pem
	I1006 14:47:51.009946  780645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 13:59 /usr/share/ca-certificates/7438512.pem
	I1006 14:47:51.010040  780645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7438512.pem
	I1006 14:47:51.021415  780645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7438512.pem /etc/ssl/certs/3ec20f2e.0"
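
Each certificate block above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so the system trust store picks it up. A Go sketch of the hash-and-link step, shelling out to openssl for the hash (writing /etc/ssl/certs requires root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // Compute the subject hash of a CA cert with openssl and link
    // /etc/ssl/certs/<hash>.0 to it, as in the log above.
    func main() {
        const pem = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := "/etc/ssl/certs/" + hash + ".0"
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(pem, link); err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
        }
        fmt.Println(link, "->", pem)
    }
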
	I1006 14:47:51.036430  780645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:47:51.048362  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:47:51.059802  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:47:51.074755  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:47:51.097221  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:47:51.111515  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:47:51.126076  780645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
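
The openssl -checkend 86400 runs above fail if a certificate expires within the next 24 hours, which is what triggers certificate regeneration. The same check in pure Go with crypto/x509, using one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
    // exit non-zero if the certificate expires within the next 24 hours.
    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate is valid until", cert.NotAfter)
    }
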
	I1006 14:47:51.136261  780645 kubeadm.go:400] StartCluster: {Name:pause-670840 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-670840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.41 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:47:51.136437  780645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:47:51.136534  780645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:47:51.213622  780645 cri.go:89] found id: "24692b5695875a0e07b8044479544bd940fa12fb399ecbcfcb42c79741c24af1"
	I1006 14:47:51.213657  780645 cri.go:89] found id: "6f1f73cc4a476e62f0bd839c947f2ab1c2014a3e06e2060ee96869e039e1c125"
	I1006 14:47:51.213664  780645 cri.go:89] found id: "8aeec49d42a25a681a66edb73e04dc51bdd60cc474a3560ed674b9a0c9ba6dc7"
	I1006 14:47:51.213670  780645 cri.go:89] found id: "0dd1035d820529039269bde549155f111bc74b5b3b5019542983cf0d262d42f9"
	I1006 14:47:51.213675  780645 cri.go:89] found id: "f0ca3c7483e87d53990c734308aedb63f5b38fb6a25bfd03e22d7dec5a050cfb"
	I1006 14:47:51.213680  780645 cri.go:89] found id: "3b13529d4e4132c35fa76f6df0347178f1c6dc37e51ffc1fd1f6cd6c4d317d1e"
	I1006 14:47:51.213685  780645 cri.go:89] found id: "97e6c2494ec684f14cf5a4ab45bd825b7029c38692f2aabffd6254a6b52403a8"
	I1006 14:47:51.213689  780645 cri.go:89] found id: ""
	I1006 14:47:51.213757  780645 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-670840 -n pause-670840
helpers_test.go:269: (dbg) Run:  kubectl --context pause-670840 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (42.09s)

                                                
                                    

Test pass (275/321)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.21
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.5
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.67
22 TestOffline 62.74
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 142.94
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 10.59
35 TestAddons/parallel/Registry 16.92
36 TestAddons/parallel/RegistryCreds 0.8
38 TestAddons/parallel/InspektorGadget 5.45
39 TestAddons/parallel/MetricsServer 6.69
41 TestAddons/parallel/CSI 62.38
42 TestAddons/parallel/Headlamp 19.96
43 TestAddons/parallel/CloudSpanner 6.91
44 TestAddons/parallel/LocalPath 12.25
45 TestAddons/parallel/NvidiaDevicePlugin 5.93
46 TestAddons/parallel/Yakd 12.33
48 TestAddons/StoppedEnableDisable 85.78
49 TestCertOptions 53.72
50 TestCertExpiration 301.81
52 TestForceSystemdFlag 63
53 TestForceSystemdEnv 62.16
55 TestKVMDriverInstallOrUpdate 0.61
59 TestErrorSpam/setup 41.64
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.82
62 TestErrorSpam/pause 1.83
63 TestErrorSpam/unpause 1.89
64 TestErrorSpam/stop 89.99
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 59.02
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 29.28
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.34
76 TestFunctional/serial/CacheCmd/cache/add_local 1.24
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 33.2
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.55
87 TestFunctional/serial/LogsFileCmd 1.58
88 TestFunctional/serial/InvalidService 4.09
90 TestFunctional/parallel/ConfigCmd 0.37
91 TestFunctional/parallel/DashboardCmd 14.42
92 TestFunctional/parallel/DryRun 0.3
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 1.08
98 TestFunctional/parallel/ServiceCmdConnect 18.6
99 TestFunctional/parallel/AddonsCmd 0.18
100 TestFunctional/parallel/PersistentVolumeClaim 29.53
102 TestFunctional/parallel/SSHCmd 0.51
103 TestFunctional/parallel/CpCmd 1.7
104 TestFunctional/parallel/MySQL 23.24
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.58
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
114 TestFunctional/parallel/License 0.33
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.4
122 TestFunctional/parallel/ImageCommands/Setup 0.51
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.14
134 TestFunctional/parallel/Version/short 0.05
135 TestFunctional/parallel/Version/components 0.52
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.51
138 TestFunctional/parallel/ProfileCmd/profile_list 0.48
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.61
140 TestFunctional/parallel/ServiceCmd/DeployApp 56.26
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.33
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.78
143 TestFunctional/parallel/ImageCommands/ImageRemove 2.83
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.57
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.11
146 TestFunctional/parallel/MountCmd/any-port 13.68
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.31
149 TestFunctional/parallel/ServiceCmd/List 1.24
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.25
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
152 TestFunctional/parallel/ServiceCmd/Format 0.3
153 TestFunctional/parallel/ServiceCmd/URL 0.32
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 223.83
162 TestMultiControlPlane/serial/DeployApp 7.49
163 TestMultiControlPlane/serial/PingHostFromPods 1.34
164 TestMultiControlPlane/serial/AddWorkerNode 43.83
165 TestMultiControlPlane/serial/NodeLabels 0.08
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
167 TestMultiControlPlane/serial/CopyFile 13.85
168 TestMultiControlPlane/serial/StopSecondaryNode 81.09
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
170 TestMultiControlPlane/serial/RestartSecondaryNode 38.73
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.08
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 304.78
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.58
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
175 TestMultiControlPlane/serial/StopCluster 256.04
176 TestMultiControlPlane/serial/RestartCluster 103.3
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
178 TestMultiControlPlane/serial/AddSecondaryNode 90.28
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
183 TestJSONOutput/start/Command 59.66
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.8
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.71
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.22
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 82.59
215 TestMountStart/serial/StartWithMountFirst 20.63
216 TestMountStart/serial/VerifyMountFirst 0.39
217 TestMountStart/serial/StartWithMountSecond 23.53
218 TestMountStart/serial/VerifyMountSecond 0.39
219 TestMountStart/serial/DeleteFirst 0.59
220 TestMountStart/serial/VerifyMountPostDelete 0.39
221 TestMountStart/serial/Stop 1.24
222 TestMountStart/serial/RestartStopped 19.61
223 TestMountStart/serial/VerifyMountPostStop 0.39
226 TestMultiNode/serial/FreshStart2Nodes 99.94
227 TestMultiNode/serial/DeployApp2Nodes 5.24
228 TestMultiNode/serial/PingHostFrom2Pods 0.83
229 TestMultiNode/serial/AddNode 43.37
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.62
232 TestMultiNode/serial/CopyFile 7.8
233 TestMultiNode/serial/StopNode 2.62
234 TestMultiNode/serial/StartAfterStop 37.2
235 TestMultiNode/serial/RestartKeepsNodes 289.92
236 TestMultiNode/serial/DeleteNode 2.8
237 TestMultiNode/serial/StopMultiNode 162.07
238 TestMultiNode/serial/RestartMultiNode 86.79
239 TestMultiNode/serial/ValidateNameConflict 40.25
246 TestScheduledStopUnix 110.54
250 TestRunningBinaryUpgrade 154.68
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 84.29
264 TestNetworkPlugins/group/false 4.09
268 TestNoKubernetes/serial/StartWithStopK8s 49.87
269 TestNoKubernetes/serial/Start 42.95
271 TestPause/serial/Start 78.41
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
282 TestStoppedBinaryUpgrade/Setup 0.61
283 TestStoppedBinaryUpgrade/Upgrade 81.91
284 TestNetworkPlugins/group/auto/Start 62.1
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
286 TestNetworkPlugins/group/kindnet/Start 63.47
287 TestNetworkPlugins/group/auto/KubeletFlags 0.23
288 TestNetworkPlugins/group/auto/NetCatPod 10.32
289 TestNetworkPlugins/group/auto/DNS 0.2
290 TestNetworkPlugins/group/auto/Localhost 0.17
291 TestNetworkPlugins/group/auto/HairPin 0.17
292 TestNetworkPlugins/group/calico/Start 81.4
293 TestNetworkPlugins/group/custom-flannel/Start 82.82
294 TestNetworkPlugins/group/kindnet/ControllerPod 6.05
295 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
296 TestNetworkPlugins/group/kindnet/NetCatPod 11.17
297 TestNetworkPlugins/group/kindnet/DNS 0.2
298 TestNetworkPlugins/group/kindnet/Localhost 0.17
299 TestNetworkPlugins/group/kindnet/HairPin 0.17
300 TestNetworkPlugins/group/calico/ControllerPod 6.01
301 TestNetworkPlugins/group/enable-default-cni/Start 62.53
302 TestNetworkPlugins/group/calico/KubeletFlags 0.24
303 TestNetworkPlugins/group/calico/NetCatPod 13.32
304 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
305 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
306 TestNetworkPlugins/group/calico/DNS 0.22
307 TestNetworkPlugins/group/calico/Localhost 0.18
308 TestNetworkPlugins/group/calico/HairPin 0.17
309 TestNetworkPlugins/group/custom-flannel/DNS 0.19
310 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
311 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
312 TestNetworkPlugins/group/flannel/Start 75.44
313 TestNetworkPlugins/group/bridge/Start 78.91
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
320 TestStartStop/group/old-k8s-version/serial/FirstStart 60.76
321 TestNetworkPlugins/group/flannel/ControllerPod 6.01
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
323 TestNetworkPlugins/group/flannel/NetCatPod 13.28
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.52
325 TestNetworkPlugins/group/bridge/NetCatPod 10.52
326 TestNetworkPlugins/group/flannel/DNS 0.2
327 TestNetworkPlugins/group/flannel/Localhost 0.16
328 TestNetworkPlugins/group/flannel/HairPin 0.15
329 TestNetworkPlugins/group/bridge/DNS 0.17
330 TestNetworkPlugins/group/bridge/Localhost 0.13
331 TestNetworkPlugins/group/bridge/HairPin 0.13
333 TestStartStop/group/no-preload/serial/FirstStart 76.63
335 TestStartStop/group/embed-certs/serial/FirstStart 77.94
336 TestStartStop/group/old-k8s-version/serial/DeployApp 9.36
337 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.22
338 TestStartStop/group/old-k8s-version/serial/Stop 87.11
339 TestStartStop/group/no-preload/serial/DeployApp 10.3
340 TestStartStop/group/embed-certs/serial/DeployApp 9.33
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
342 TestStartStop/group/no-preload/serial/Stop 80.45
343 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
344 TestStartStop/group/embed-certs/serial/Stop 84.82
345 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
346 TestStartStop/group/old-k8s-version/serial/SecondStart 46.57
347 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8.01
348 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
349 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
350 TestStartStop/group/old-k8s-version/serial/Pause 2.88
352 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.36
353 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
354 TestStartStop/group/no-preload/serial/SecondStart 78.86
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
356 TestStartStop/group/embed-certs/serial/SecondStart 73.3
357 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.57
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.27
359 TestStartStop/group/default-k8s-diff-port/serial/Stop 81.39
360 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
361 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
362 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
364 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
365 TestStartStop/group/no-preload/serial/Pause 3.09
366 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
367 TestStartStop/group/embed-certs/serial/Pause 3.1
369 TestStartStop/group/newest-cni/serial/FirstStart 43.24
370 TestStartStop/group/newest-cni/serial/DeployApp 0
371 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
372 TestStartStop/group/newest-cni/serial/Stop 86.44
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
374 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.08
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.95
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
380 TestStartStop/group/newest-cni/serial/SecondStart 33.91
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
384 TestStartStop/group/newest-cni/serial/Pause 3.32
x
+
TestDownloadOnly/v1.28.0/json-events (7.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-985373 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-985373 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (7.205007285s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1006 13:50:19.999454  743851 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1006 13:50:19.999558  743851 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
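
The preload-exists check passes as soon as the versioned tarball already sits in the local cache. A sketch of that existence test, with the tarball name taken from the path printed above and a hypothetical cache root under the user's home directory:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // Pass when the versioned preload tarball is already cached locally.
    // The cache root here is an assumption; the CI run uses a
    // MINIKUBE_HOME under the Jenkins workspace instead.
    func main() {
        home, _ := os.UserHomeDir()
        tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
            "preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
        if _, err := os.Stat(tarball); err != nil {
            fmt.Println("preload missing:", err)
            return
        }
        fmt.Println("found local preload:", tarball)
    }
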

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-985373
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-985373: exit status 85 (66.787245ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-985373 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-985373 │ jenkins │ v1.37.0 │ 06 Oct 25 13:50 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 13:50:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 13:50:12.840947  743863 out.go:360] Setting OutFile to fd 1 ...
	I1006 13:50:12.841091  743863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:50:12.841104  743863 out.go:374] Setting ErrFile to fd 2...
	I1006 13:50:12.841111  743863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:50:12.841372  743863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	W1006 13:50:12.841517  743863 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21701-739942/.minikube/config/config.json: open /home/jenkins/minikube-integration/21701-739942/.minikube/config/config.json: no such file or directory
	I1006 13:50:12.842024  743863 out.go:368] Setting JSON to true
	I1006 13:50:12.843112  743863 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12764,"bootTime":1759745849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 13:50:12.843214  743863 start.go:140] virtualization: kvm guest
	I1006 13:50:12.845649  743863 out.go:99] [download-only-985373] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1006 13:50:12.845817  743863 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball: no such file or directory
	I1006 13:50:12.845876  743863 notify.go:220] Checking for updates...
	I1006 13:50:12.847552  743863 out.go:171] MINIKUBE_LOCATION=21701
	I1006 13:50:12.849406  743863 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 13:50:12.850932  743863 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 13:50:12.852412  743863 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 13:50:12.853922  743863 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1006 13:50:12.856697  743863 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 13:50:12.856958  743863 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 13:50:12.888214  743863 out.go:99] Using the kvm2 driver based on user configuration
	I1006 13:50:12.888251  743863 start.go:304] selected driver: kvm2
	I1006 13:50:12.888258  743863 start.go:924] validating driver "kvm2" against <nil>
	I1006 13:50:12.888630  743863 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 13:50:12.888729  743863 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 13:50:12.904099  743863 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 13:50:12.904159  743863 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21701-739942/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 13:50:12.919503  743863 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1006 13:50:12.919557  743863 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 13:50:12.920134  743863 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1006 13:50:12.920308  743863 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 13:50:12.920335  743863 cni.go:84] Creating CNI manager for ""
	I1006 13:50:12.920385  743863 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1006 13:50:12.920398  743863 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1006 13:50:12.920466  743863 start.go:348] cluster config:
	{Name:download-only-985373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-985373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 13:50:12.920703  743863 iso.go:125] acquiring lock: {Name:mk8de6812bb58933af0bc6eb1d955bf118a3bcec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 13:50:12.923286  743863 out.go:99] Downloading VM boot image ...
	I1006 13:50:12.923342  743863 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21701-739942/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1006 13:50:16.410649  743863 out.go:99] Starting "download-only-985373" primary control-plane node in "download-only-985373" cluster
	I1006 13:50:16.410691  743863 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 13:50:16.430153  743863 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1006 13:50:16.430199  743863 cache.go:58] Caching tarball of preloaded images
	I1006 13:50:16.430408  743863 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 13:50:16.432203  743863 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1006 13:50:16.432236  743863 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1006 13:50:16.463995  743863 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1006 13:50:16.464123  743863 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-985373 host does not exist
	  To start a cluster, run: "minikube start -p download-only-985373"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
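Note: the Last Start log above shows the preload fetch as two steps: the MD5 checksum is retrieved from the GCS API first, then the tarball URL is downloaded with ?checksum=md5:<hash> appended so the result can be verified. A rough, self-contained Go sketch of download-then-verify follows; the URL and destination are placeholders and this is not minikube's download.go.

    // checksumdl.go - a sketch of a download step that hashes while writing and
    // compares against a separately fetched MD5, as in the log above.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func downloadWithMD5(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        // Hash while writing so the file is read only once.
        h := md5.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        // Placeholder URL/path; the checksum value is the one logged above.
        err := downloadWithMD5("https://example.com/preload.tar.lz4",
            "/tmp/preload.tar.lz4", "72bc7f8573f574c02d8c9a9b3496176b")
        fmt.Println(err)
    }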

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-985373
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.5s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-672709 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-672709 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3.498945156s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.50s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1006 13:50:23.879095  743851 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1006 13:50:23.879144  743851 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-739942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-672709
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-672709: exit status 85 (69.731771ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-985373 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-985373 │ jenkins │ v1.37.0 │ 06 Oct 25 13:50 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 06 Oct 25 13:50 UTC │ 06 Oct 25 13:50 UTC │
	│ delete  │ -p download-only-985373                                                                                                                                                                             │ download-only-985373 │ jenkins │ v1.37.0 │ 06 Oct 25 13:50 UTC │ 06 Oct 25 13:50 UTC │
	│ start   │ -o=json --download-only -p download-only-672709 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-672709 │ jenkins │ v1.37.0 │ 06 Oct 25 13:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 13:50:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 13:50:20.425324  744054 out.go:360] Setting OutFile to fd 1 ...
	I1006 13:50:20.425582  744054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:50:20.425604  744054 out.go:374] Setting ErrFile to fd 2...
	I1006 13:50:20.425608  744054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:50:20.425791  744054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 13:50:20.426259  744054 out.go:368] Setting JSON to true
	I1006 13:50:20.427149  744054 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12771,"bootTime":1759745849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 13:50:20.427249  744054 start.go:140] virtualization: kvm guest
	I1006 13:50:20.429086  744054 out.go:99] [download-only-672709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 13:50:20.429279  744054 notify.go:220] Checking for updates...
	I1006 13:50:20.430570  744054 out.go:171] MINIKUBE_LOCATION=21701
	I1006 13:50:20.432233  744054 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 13:50:20.433709  744054 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 13:50:20.435127  744054 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 13:50:20.436483  744054 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-672709 host does not exist
	  To start a cluster, run: "minikube start -p download-only-672709"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-672709
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.67s)

=== RUN   TestBinaryMirror
I1006 13:50:24.544984  743851 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-278171 --alsologtostderr --binary-mirror http://127.0.0.1:43617 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-278171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-278171
--- PASS: TestBinaryMirror (0.67s)
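Note: TestBinaryMirror points minikube at a local HTTP endpoint (--binary-mirror http://127.0.0.1:43617) instead of dl.k8s.io. The mirror side of that is just a static file server; a minimal sketch follows, with the directory a placeholder for wherever the kubectl/kubelet/kubeadm binaries were pre-staged.

    // mirror.go - a sketch of a local binary mirror the --binary-mirror flag
    // could point at; directory and port here are assumptions.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve a directory of pre-downloaded Kubernetes binaries over HTTP.
        http.Handle("/", http.FileServer(http.Dir("/var/cache/k8s-binaries")))
        log.Fatal(http.ListenAndServe("127.0.0.1:43617", nil))
    }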

                                                
                                    
TestOffline (62.74s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-388777 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-388777 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m1.861682537s)
helpers_test.go:175: Cleaning up "offline-crio-388777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-388777
--- PASS: TestOffline (62.74s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-395535
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-395535: exit status 85 (58.518919ms)

-- stdout --
	* Profile "addons-395535" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-395535"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-395535
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-395535: exit status 85 (58.885444ms)

-- stdout --
	* Profile "addons-395535" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-395535"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (142.94s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-395535 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-395535 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m22.936432283s)
--- PASS: TestAddons/Setup (142.94s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-395535 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-395535 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.59s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-395535 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-395535 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5379af33-1084-493d-a8bf-f3ad31a70aeb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5379af33-1084-493d-a8bf-f3ad31a70aeb] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005100884s
addons_test.go:694: (dbg) Run:  kubectl --context addons-395535 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-395535 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-395535 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.59s)

                                                
                                    
TestAddons/parallel/Registry (16.92s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.596175ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-6wslm" [ac13d99a-af77-4a4d-ad44-a574c23cb352] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009054794s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-kh2xs" [2b9dfc63-a725-49d8-a06d-8607e45aacbd] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005142062s
addons_test.go:392: (dbg) Run:  kubectl --context addons-395535 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-395535 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-395535 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.986721105s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.92s)
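Note: the registry check above probes the in-cluster service with `wget --spider -S`, i.e. a headers-only request from a throwaway busybox pod. An equivalent Go probe might look like the sketch below; the service URL only resolves from inside the cluster, and the timeout is an assumption.

    // probe.go - a sketch of the reachability check the registry test performs;
    // HEAD mirrors wget --spider: fetch headers, discard the body.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            fmt.Println("registry unreachable:", err)
            return
        }
        resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }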

                                                
                                    
TestAddons/parallel/RegistryCreds (0.8s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 14.728493ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-395535
addons_test.go:332: (dbg) Run:  kubectl --context addons-395535 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.80s)

                                                
                                    
TestAddons/parallel/InspektorGadget (5.45s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-rmw8b" [eaee675f-3751-4815-9217-b41c12d637a9] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007427003s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.45s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.69s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.857629ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-zdqg2" [2c5c0f60-39b7-49e4-9308-804e749198d4] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01001585s
addons_test.go:463: (dbg) Run:  kubectl --context addons-395535 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-395535 addons disable metrics-server --alsologtostderr -v=1: (1.576762442s)
--- PASS: TestAddons/parallel/MetricsServer (6.69s)

                                                
                                    
TestAddons/parallel/CSI (62.38s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1006 13:53:13.682443  743851 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1006 13:53:13.690115  743851 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1006 13:53:13.690143  743851 kapi.go:107] duration metric: took 7.705547ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.716848ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-395535 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/10/06 13:53:23 [DEBUG] GET http://192.168.39.36:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-395535 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b4946fb3-2764-4559-abd4-09fd0d63269c] Pending
helpers_test.go:352: "task-pv-pod" [b4946fb3-2764-4559-abd4-09fd0d63269c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b4946fb3-2764-4559-abd4-09fd0d63269c] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004518899s
addons_test.go:572: (dbg) Run:  kubectl --context addons-395535 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-395535 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-395535 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-395535 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-395535 delete pod task-pv-pod: (1.186683511s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-395535 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-395535 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-395535 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [fdb98279-8a7a-44fe-9b1c-691dd491b7be] Pending
helpers_test.go:352: "task-pv-pod-restore" [fdb98279-8a7a-44fe-9b1c-691dd491b7be] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [fdb98279-8a7a-44fe-9b1c-691dd491b7be] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005844122s
addons_test.go:614: (dbg) Run:  kubectl --context addons-395535 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-395535 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-395535 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-395535 addons disable volumesnapshots --alsologtostderr -v=1: (1.106370009s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-395535 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.008191656s)
--- PASS: TestAddons/parallel/CSI (62.38s)
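Note: the repeated helpers_test.go:402 lines above are a poll loop: the PVC's .status.phase is read via kubectl jsonpath until it reports Bound (or the timeout expires). A standalone Go sketch of the same loop, shelling out to kubectl, could look like this; the profile, PVC name, and timeout are taken from the log, but the helper itself is hypothetical:

    // pvcwait.go - a sketch of the PVC-phase poll loop seen in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitPVCBound(context, name, ns string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Same query the helper runs: read only the phase field.
            out, err := exec.Command("kubectl", "--context", context,
                "get", "pvc", name, "-n", ns,
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Bound" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
    }

    func main() {
        fmt.Println(waitPVCBound("addons-395535", "hpvc", "default", 6*time.Minute))
    }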

                                                
                                    
TestAddons/parallel/Headlamp (19.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-395535 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-395535 --alsologtostderr -v=1: (1.004445803s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-9dxd2" [1681d25b-bd2a-4860-aa32-4fb8e5393cf1] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-9dxd2" [1681d25b-bd2a-4860-aa32-4fb8e5393cf1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-9dxd2" [1681d25b-bd2a-4860-aa32-4fb8e5393cf1] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.008393095s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-395535 addons disable headlamp --alsologtostderr -v=1: (5.950676432s)
--- PASS: TestAddons/parallel/Headlamp (19.96s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.91s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-nsppl" [78837362-d7b2-43bd-b678-f45c8b436180] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007041049s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.91s)

                                                
                                    
TestAddons/parallel/LocalPath (12.25s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-395535 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-395535 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-395535 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d1ecddee-c93c-4fbe-88c8-e96e42c9c272] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [d1ecddee-c93c-4fbe-88c8-e96e42c9c272] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [d1ecddee-c93c-4fbe-88c8-e96e42c9c272] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00368421s
addons_test.go:967: (dbg) Run:  kubectl --context addons-395535 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 ssh "cat /opt/local-path-provisioner/pvc-7d09967f-9c88-48c9-87eb-fc54c796f56b_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-395535 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-395535 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.25s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.93s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-grxdz" [64007d43-4ee6-4ad1-8000-d38b65a402e2] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.010315929s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.93s)

                                                
                                    
TestAddons/parallel/Yakd (12.33s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-sjztx" [685c9ffe-e299-42d9-9a23-afb113066dca] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00526198s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-395535 addons disable yakd --alsologtostderr -v=1: (6.323126368s)
--- PASS: TestAddons/parallel/Yakd (12.33s)

                                                
                                    
TestAddons/StoppedEnableDisable (85.78s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-395535
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-395535: (1m25.49069148s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-395535
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-395535
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-395535
--- PASS: TestAddons/StoppedEnableDisable (85.78s)

                                                
                                    
TestCertOptions (53.72s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-809645 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-809645 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (52.425697664s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-809645 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-809645 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-809645 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-809645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-809645
--- PASS: TestCertOptions (53.72s)

                                                
                                    
TestCertExpiration (301.81s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-435206 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-435206 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m9.294844971s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-435206 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-435206 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (51.762370392s)
helpers_test.go:175: Cleaning up "cert-expiration-435206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-435206
--- PASS: TestCertExpiration (301.81s)

                                                
                                    
TestForceSystemdFlag (63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-640885 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-640885 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.020639586s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-640885 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-640885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-640885
--- PASS: TestForceSystemdFlag (63.00s)

                                                
                                    
TestForceSystemdEnv (62.16s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-455420 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-455420 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m1.337363912s)
helpers_test.go:175: Cleaning up "force-systemd-env-455420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-455420
--- PASS: TestForceSystemdEnv (62.16s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.61s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1006 14:45:52.223398  743851 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1006 14:45:52.223626  743851 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2991677477/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1006 14:45:52.264162  743851 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2991677477/001/docker-machine-driver-kvm2 version is 1.1.1
W1006 14:45:52.264225  743851 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1006 14:45:52.264485  743851 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1006 14:45:52.264578  743851 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2991677477/001/docker-machine-driver-kvm2
I1006 14:45:52.673730  743851 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2991677477/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1006 14:45:52.694772  743851 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2991677477/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.61s)
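Note: the install.go lines above show the test seeding PATH with a stale driver (version 1.1.1), observing the mismatch against the wanted 1.37.0, and re-downloading. The decision itself reduces to a version comparison; a toy sketch follows, with the version strings taken from the log and the comparison deliberately simplified to exact-match pinning (real code would parse semver).

    // driverupdate.go - a sketch of the decide-then-download step in the log;
    // how minikube actually queries the driver binary is not shown here.
    package main

    import "fmt"

    func needsUpdate(installed, wanted string) bool {
        // Exact-match pinning is enough for this illustration.
        return installed != wanted
    }

    func main() {
        installed, wanted := "1.1.1", "1.37.0"
        if needsUpdate(installed, wanted) {
            fmt.Printf("docker-machine-driver-kvm2 is %s, want %s: downloading v%s\n",
                installed, wanted, wanted)
        }
    }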

                                                
                                    
TestErrorSpam/setup (41.64s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-216058 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-216058 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1006 13:57:48.931817  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:57:48.938341  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:57:48.949781  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:57:48.971290  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:57:49.012795  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:57:49.094299  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:57:49.255930  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:57:49.577727  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:57:50.219887  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:57:51.501975  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:57:54.063802  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:57:59.186029  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:58:09.427812  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-216058 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-216058 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.643894943s)
--- PASS: TestErrorSpam/setup (41.64s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.82s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 pause
--- PASS: TestErrorSpam/pause (1.83s)

TestErrorSpam/unpause (1.89s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 unpause
--- PASS: TestErrorSpam/unpause (1.89s)

TestErrorSpam/stop (89.99s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 stop
E1006 13:58:29.910126  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 13:59:10.873456  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 stop: (1m26.575778843s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 stop: (1.40943046s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-216058 --log_dir /tmp/nospam-216058 stop: (2.008422152s)
--- PASS: TestErrorSpam/stop (89.99s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21701-739942/.minikube/files/etc/test/nested/copy/743851/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.02s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-561811 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1006 14:00:32.797877  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-561811 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (59.015066s)
--- PASS: TestFunctional/serial/StartWithProxy (59.02s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.28s)

=== RUN   TestFunctional/serial/SoftStart
I1006 14:00:46.054110  743851 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-561811 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-561811 --alsologtostderr -v=8: (29.281375434s)
functional_test.go:678: soft start took 29.282226277s for "functional-561811" cluster.
I1006 14:01:15.335926  743851 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.28s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-561811 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 cache add registry.k8s.io/pause:3.1: (1.091413104s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 cache add registry.k8s.io/pause:3.3: (1.139097393s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 cache add registry.k8s.io/pause:latest: (1.111689563s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-561811 /tmp/TestFunctionalserialCacheCmdcacheadd_local1741751464/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 cache add minikube-local-cache-test:functional-561811
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 cache delete minikube-local-cache-test:functional-561811
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-561811
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (238.443724ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 cache reload: (1.032867336s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
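
The cache_reload sequence above is a remove/verify/reload/verify round trip: delete the image inside the node, prove `crictl inspecti` now fails, run `cache reload`, and prove the image is back. A minimal Go sketch of the same loop via os/exec, assuming a `minikube` binary on PATH and the profile name from the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func minikube(args ...string) error {
		return exec.Command("minikube", args...).Run()
	}

	func main() {
		const p = "functional-561811"
		const img = "registry.k8s.io/pause:latest"

		// 1. remove the image inside the node
		if err := minikube("-p", p, "ssh", "sudo crictl rmi "+img); err != nil {
			log.Fatal(err)
		}
		// 2. inspecti must now fail (non-nil error means non-zero exit)
		if minikube("-p", p, "ssh", "sudo crictl inspecti "+img) == nil {
			log.Fatal("image still present after rmi")
		}
		// 3. repopulate the node from the host-side cache
		if err := minikube("-p", p, "cache", "reload"); err != nil {
			log.Fatal(err)
		}
		// 4. inspecti must succeed again
		if err := minikube("-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
			log.Fatal("image missing after cache reload: ", err)
		}
	}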

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 kubectl -- --context functional-561811 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-561811 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (33.2s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-561811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-561811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.19950099s)
functional_test.go:776: restart took 33.199636154s for "functional-561811" cluster.
I1006 14:01:55.733895  743851 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (33.20s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-561811 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
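
ComponentHealth derives the phase/status lines above from `kubectl get po -o=json`: every control-plane pod must report phase Running and a Ready condition of True. A small Go sketch of that JSON check; the struct models only the fields the test consults:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-561811",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}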

TestFunctional/serial/LogsCmd (1.55s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 logs: (1.545355066s)
--- PASS: TestFunctional/serial/LogsCmd (1.55s)

TestFunctional/serial/LogsFileCmd (1.58s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 logs --file /tmp/TestFunctionalserialLogsFileCmd3096261241/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 logs --file /tmp/TestFunctionalserialLogsFileCmd3096261241/001/logs.txt: (1.573901113s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.58s)

TestFunctional/serial/InvalidService (4.09s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-561811 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-561811
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-561811: exit status 115 (314.617961ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.208:30750 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-561811 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.09s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 config get cpus: exit status 14 (60.96328ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 config get cpus: exit status 14 (59.414738ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
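
The contract ConfigCmd leans on is that `config get` for an unset key fails with exit status 14 rather than printing an empty value, so set/unset round trips are observable from the exit code alone. A short Go sketch of telling that case apart with exec.ExitError:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-561811",
			"config", "get", "cpus").Output()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("cpus = %s", out)
		case errors.As(err, &ee) && ee.ExitCode() == 14:
			fmt.Println("cpus is not set") // the exit status seen in the log
		default:
			fmt.Println("unexpected failure:", err)
		}
	}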

TestFunctional/parallel/DashboardCmd (14.42s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-561811 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-561811 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 752104: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.42s)

TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-561811 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-561811 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (154.447461ms)

-- stdout --
	* [functional-561811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1006 14:02:27.725180  751992 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:02:27.725500  751992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:02:27.725514  751992 out.go:374] Setting ErrFile to fd 2...
	I1006 14:02:27.725521  751992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:02:27.725874  751992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:02:27.726554  751992 out.go:368] Setting JSON to false
	I1006 14:02:27.727970  751992 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13499,"bootTime":1759745849,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:02:27.728108  751992 start.go:140] virtualization: kvm guest
	I1006 14:02:27.730124  751992 out.go:179] * [functional-561811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:02:27.731561  751992 notify.go:220] Checking for updates...
	I1006 14:02:27.731582  751992 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:02:27.733029  751992 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:02:27.734795  751992 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 14:02:27.736371  751992 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 14:02:27.741407  751992 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:02:27.743097  751992 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:02:27.744835  751992 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:02:27.745410  751992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:02:27.745488  751992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:02:27.760638  751992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42549
	I1006 14:02:27.761290  751992 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:02:27.762007  751992 main.go:141] libmachine: Using API Version  1
	I1006 14:02:27.762035  751992 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:02:27.762400  751992 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:02:27.762662  751992 main.go:141] libmachine: (functional-561811) Calling .DriverName
	I1006 14:02:27.762937  751992 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:02:27.763407  751992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:02:27.763458  751992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:02:27.777691  751992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I1006 14:02:27.778155  751992 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:02:27.778679  751992 main.go:141] libmachine: Using API Version  1
	I1006 14:02:27.778707  751992 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:02:27.779066  751992 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:02:27.779384  751992 main.go:141] libmachine: (functional-561811) Calling .DriverName
	I1006 14:02:27.812789  751992 out.go:179] * Using the kvm2 driver based on existing profile
	I1006 14:02:27.814242  751992 start.go:304] selected driver: kvm2
	I1006 14:02:27.814276  751992 start.go:924] validating driver "kvm2" against &{Name:functional-561811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-561811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:02:27.814399  751992 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:02:27.816652  751992 out.go:203] 
	W1006 14:02:27.818284  751992 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1006 14:02:27.819614  751992 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-561811 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.30s)
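
The failing half of DryRun exercises minikube's up-front resource validation: the 250MB request is rejected against the 1800MB usable minimum before any VM work begins, which is why the run exits in roughly 150ms with status 23. The gate itself is a plain comparison; a sketch using the two numbers from the message:

	package main

	import "fmt"

	const minUsableMemMB = 1800 // the minimum quoted by RSRC_INSUFFICIENT_REQ_MEMORY

	// validateMemory mirrors the dry-run gate: fail fast, before provisioning.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		}
	}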

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-561811 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-561811 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (139.11147ms)

-- stdout --
	* [functional-561811] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1006 14:02:24.832673  751692 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:02:24.832954  751692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:02:24.832966  751692 out.go:374] Setting ErrFile to fd 2...
	I1006 14:02:24.832970  751692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:02:24.833332  751692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:02:24.833823  751692 out.go:368] Setting JSON to false
	I1006 14:02:24.834793  751692 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13496,"bootTime":1759745849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:02:24.834902  751692 start.go:140] virtualization: kvm guest
	I1006 14:02:24.836916  751692 out.go:179] * [functional-561811] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1006 14:02:24.838320  751692 notify.go:220] Checking for updates...
	I1006 14:02:24.838356  751692 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:02:24.839859  751692 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:02:24.841355  751692 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 14:02:24.842700  751692 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 14:02:24.843976  751692 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:02:24.845303  751692 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:02:24.846841  751692 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:02:24.847314  751692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:02:24.847391  751692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:02:24.861992  751692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I1006 14:02:24.862527  751692 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:02:24.863127  751692 main.go:141] libmachine: Using API Version  1
	I1006 14:02:24.863151  751692 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:02:24.863574  751692 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:02:24.863814  751692 main.go:141] libmachine: (functional-561811) Calling .DriverName
	I1006 14:02:24.864140  751692 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:02:24.864626  751692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:02:24.864680  751692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:02:24.878579  751692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39447
	I1006 14:02:24.879090  751692 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:02:24.879761  751692 main.go:141] libmachine: Using API Version  1
	I1006 14:02:24.879804  751692 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:02:24.880230  751692 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:02:24.880474  751692 main.go:141] libmachine: (functional-561811) Calling .DriverName
	I1006 14:02:24.912639  751692 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1006 14:02:24.913777  751692 start.go:304] selected driver: kvm2
	I1006 14:02:24.913792  751692 start.go:924] validating driver "kvm2" against &{Name:functional-561811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-561811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:02:24.913920  751692 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:02:24.915989  751692 out.go:203] 
	W1006 14:02:24.917169  751692 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1006 14:02:24.918442  751692 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

TestFunctional/parallel/ServiceCmdConnect (18.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-561811 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-561811 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-fb4gx" [a1aa7d4e-a36e-4ef0-82c8-a8cbfce09192] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-fb4gx" [a1aa7d4e-a36e-4ef0-82c8-a8cbfce09192] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.005959646s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.208:30226
functional_test.go:1680: http://192.168.39.208:30226: success! body:
Request served by hello-node-connect-7d85dfc575-fb4gx

HTTP/1.1 GET /

Host: 192.168.39.208:30226
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.60s)
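
ServiceCmdConnect finishes by resolving the NodePort URL with `service ... --url` and fetching it; the echo-server reply above is the body of that GET. A compact Go sketch of the resolve-and-probe step, reusing the profile and service names from the log:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Resolve something like "http://192.168.39.208:30226".
		out, err := exec.Command("minikube", "-p", "functional-561811",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out))

		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s -> %d\n%s", url, resp.StatusCode, body)
	}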

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (29.53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [03002329-7894-4224-a4e1-1680b2740cf6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005882819s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-561811 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-561811 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-561811 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-561811 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-561811 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [595b89ed-8d39-42d0-86e6-10499c756b70] Pending
helpers_test.go:352: "sp-pod" [595b89ed-8d39-42d0-86e6-10499c756b70] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [595b89ed-8d39-42d0-86e6-10499c756b70] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007833517s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-561811 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-561811 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-561811 delete -f testdata/storage-provisioner/pod.yaml: (2.987053643s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-561811 apply -f testdata/storage-provisioner/pod.yaml
I1006 14:02:41.177876  743851 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [280db172-2cd5-42d8-bd07-38b54be66940] Pending
helpers_test.go:352: "sp-pod" [280db172-2cd5-42d8-bd07-38b54be66940] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [280db172-2cd5-42d8-bd07-38b54be66940] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004844931s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-561811 exec sp-pod -- ls /tmp/mount
E1006 14:02:48.923426  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.53s)
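
The heart of the PersistentVolumeClaim test is that data written through the claim outlives the pod: touch a file under the mount, delete the pod, recreate it from the same manifest (and therefore the same PVC), and list the mount again. The same round trip as plain kubectl calls driven from Go, assuming the context and testdata paths from the log; a real check would also wait for the new pod to be Running before step 4:

	package main

	import (
		"log"
		"os/exec"
	)

	func kubectl(args ...string) error {
		full := append([]string{"--context", "functional-561811"}, args...)
		return exec.Command("kubectl", full...).Run()
	}

	func main() {
		steps := [][]string{
			{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // 1. write through the PVC
			{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // 2. remove the pod
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // 3. recreate it, same claim
			{"exec", "sp-pod", "--", "ls", "/tmp/mount"},              // 4. foo must still exist
		}
		for _, s := range steps {
			if err := kubectl(s...); err != nil {
				log.Fatal(err)
			}
		}
	}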

TestFunctional/parallel/SSHCmd (0.51s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.51s)

TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh -n functional-561811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 cp functional-561811:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4227487146/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh -n functional-561811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh -n functional-561811 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

TestFunctional/parallel/MySQL (23.24s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-561811 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-hc5zh" [2f7b5c64-8740-4eb1-b164-5f663df0d9b9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-hc5zh" [2f7b5c64-8740-4eb1-b164-5f663df0d9b9] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.007279629s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-561811 exec mysql-5bb876957f-hc5zh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-561811 exec mysql-5bb876957f-hc5zh -- mysql -ppassword -e "show databases;": exit status 1 (183.978608ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1006 14:02:23.777971  743851 retry.go:31] will retry after 517.304286ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-561811 exec mysql-5bb876957f-hc5zh -- mysql -ppassword -e "show databases;"
I1006 14:02:24.305304  743851 retry.go:31] will retry after 1.301302522s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:94f6a35b-d367-46fd-85d4-b82e67e72ea2 ResourceVersion:727 Generation:0 CreationTimestamp:2025-10-06 14:02:23 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-94f6a35b-d367-46fd-85d4-b82e67e72ea2 StorageClassName:0xc0016609b0 VolumeMode:0xc0016609c0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-561811 exec mysql-5bb876957f-hc5zh -- mysql -ppassword -e "show databases;": exit status 1 (140.117483ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1006 14:02:24.436090  743851 retry.go:31] will retry after 1.960898992s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-561811 exec mysql-5bb876957f-hc5zh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.24s)
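The two non-zero exits above are expected warm-up failures: the pod reports Running before mysqld finishes initializing, so the harness retries with a growing backoff until `show databases;` answers. A minimal sketch of that poll loop, assuming kubectl on PATH (pod name taken from this run; normally it would be discovered via the app=mysql selector):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-5bb876957f-hc5zh" // from this run; illustrative
	deadline := time.Now().Add(2 * time.Minute)
	for backoff := 500 * time.Millisecond; time.Now().Before(deadline); backoff *= 2 {
		out, err := exec.Command("kubectl", "--context", "functional-561811",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready:\n%s", out)
			return
		}
		// ERROR 1045 / ERROR 2002 while mysqld is still starting up; retry.
		time.Sleep(backoff)
	}
	fmt.Println("mysql never became ready")
}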

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/743851/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "sudo cat /etc/test/nested/copy/743851/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
TestFunctional/parallel/CertSync (1.58s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/743851.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "sudo cat /etc/ssl/certs/743851.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/743851.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "sudo cat /usr/share/ca-certificates/743851.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7438512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "sudo cat /etc/ssl/certs/7438512.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7438512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "sudo cat /usr/share/ca-certificates/7438512.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)
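CertSync looks for each certificate under three names; `51391683.0` and `3ec20f2e.0` are OpenSSL subject-hash filenames, the form the TLS stack uses to locate a CA in a hashed directory like /etc/ssl/certs. A sketch of deriving that name, assuming an openssl binary is available locally:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashName returns the "<hash>.0" filename OpenSSL would use to look
// up certPath in a hashed certificate directory.
func subjectHashName(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-subject_hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	name, err := subjectHashName("/etc/ssl/certs/743851.pem") // path from this run; illustrative
	fmt.Println(name, err)
}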

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-561811 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
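The go-template above walks the label map of the first node and prints each key. The same lookup as a standalone sketch, shelling out to kubectl with the identical template (context name from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-561811",
		"get", "nodes", "--output=go-template", "--template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, key := range strings.Fields(string(out)) {
		fmt.Println(key) // e.g. kubernetes.io/hostname
	}
}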

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "sudo systemctl is-active docker": exit status 1 (290.590842ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "sudo systemctl is-active containerd": exit status 1 (306.432851ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
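`ssh: Process exited with status 3` is the assertion here, not a failure: with crio as the configured runtime, `systemctl is-active docker` prints `inactive` and exits 3 (systemd's code for an inactive unit), so the test passes precisely because the other runtimes are down. A sketch of reading that state, assuming a systemd host:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeState runs `systemctl is-active <unit>`: exit 0 means active, any
// non-zero exit (typically 3) means the unit is not active. Output() still
// returns the captured stdout on a non-zero exit.
func runtimeState(unit string) (string, bool) {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	return strings.TrimSpace(string(out)), err == nil
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		state, active := runtimeState(unit)
		fmt.Printf("%s: %s (active=%v)\n", unit, state, active)
	}
}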

                                                
                                    
TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-561811 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-561811
localhost/kicbase/echo-server:functional-561811
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-561811 image ls --format short --alsologtostderr:
I1006 14:02:43.296227  752526 out.go:360] Setting OutFile to fd 1 ...
I1006 14:02:43.296481  752526 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:43.296489  752526 out.go:374] Setting ErrFile to fd 2...
I1006 14:02:43.296493  752526 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:43.296721  752526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
I1006 14:02:43.297313  752526 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:02:43.297402  752526 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:02:43.297799  752526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:43.297853  752526 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:43.314277  752526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34381
I1006 14:02:43.314843  752526 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:43.315538  752526 main.go:141] libmachine: Using API Version  1
I1006 14:02:43.315562  752526 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:43.315985  752526 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:43.316282  752526 main.go:141] libmachine: (functional-561811) Calling .GetState
I1006 14:02:43.318727  752526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:43.318783  752526 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:43.332438  752526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
I1006 14:02:43.332937  752526 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:43.333473  752526 main.go:141] libmachine: Using API Version  1
I1006 14:02:43.333497  752526 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:43.333867  752526 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:43.334061  752526 main.go:141] libmachine: (functional-561811) Calling .DriverName
I1006 14:02:43.334270  752526 ssh_runner.go:195] Run: systemctl --version
I1006 14:02:43.334310  752526 main.go:141] libmachine: (functional-561811) Calling .GetSSHHostname
I1006 14:02:43.337972  752526 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:43.338499  752526 main.go:141] libmachine: (functional-561811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:42:d5", ip: ""} in network mk-functional-561811: {Iface:virbr1 ExpiryTime:2025-10-06 15:00:02 +0000 UTC Type:0 Mac:52:54:00:c9:42:d5 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-561811 Clientid:01:52:54:00:c9:42:d5}
I1006 14:02:43.338541  752526 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined IP address 192.168.39.208 and MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:43.338683  752526 main.go:141] libmachine: (functional-561811) Calling .GetSSHPort
I1006 14:02:43.338858  752526 main.go:141] libmachine: (functional-561811) Calling .GetSSHKeyPath
I1006 14:02:43.339029  752526 main.go:141] libmachine: (functional-561811) Calling .GetSSHUsername
I1006 14:02:43.339214  752526 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/functional-561811/id_rsa Username:docker}
I1006 14:02:43.420608  752526 ssh_runner.go:195] Run: sudo crictl images --output json
I1006 14:02:43.477282  752526 main.go:141] libmachine: Making call to close driver server
I1006 14:02:43.477296  752526 main.go:141] libmachine: (functional-561811) Calling .Close
I1006 14:02:43.477649  752526 main.go:141] libmachine: Successfully made call to close driver server
I1006 14:02:43.477722  752526 main.go:141] libmachine: (functional-561811) DBG | Closing plugin on server side
I1006 14:02:43.477744  752526 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 14:02:43.477753  752526 main.go:141] libmachine: Making call to close driver server
I1006 14:02:43.477762  752526 main.go:141] libmachine: (functional-561811) Calling .Close
I1006 14:02:43.478063  752526 main.go:141] libmachine: Successfully made call to close driver server
I1006 14:02:43.478110  752526 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 14:02:43.478082  752526 main.go:141] libmachine: (functional-561811) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-561811 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/minikube-local-cache-test     │ functional-561811  │ 4f0f1e5c6bbe7 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/my-image                      │ functional-561811  │ a85581bb39753 │ 1.47MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 203ad09fc1566 │ 197MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-561811  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-561811 image ls --format table --alsologtostderr:
I1006 14:02:47.387920  752750 out.go:360] Setting OutFile to fd 1 ...
I1006 14:02:47.388195  752750 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:47.388207  752750 out.go:374] Setting ErrFile to fd 2...
I1006 14:02:47.388212  752750 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:47.388440  752750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
I1006 14:02:47.389078  752750 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:02:47.389200  752750 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:02:47.389657  752750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:47.389735  752750 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:47.404610  752750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41703
I1006 14:02:47.405174  752750 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:47.405785  752750 main.go:141] libmachine: Using API Version  1
I1006 14:02:47.405822  752750 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:47.406312  752750 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:47.406648  752750 main.go:141] libmachine: (functional-561811) Calling .GetState
I1006 14:02:47.409167  752750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:47.409213  752750 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:47.423547  752750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35873
I1006 14:02:47.424089  752750 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:47.424522  752750 main.go:141] libmachine: Using API Version  1
I1006 14:02:47.424545  752750 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:47.425110  752750 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:47.425429  752750 main.go:141] libmachine: (functional-561811) Calling .DriverName
I1006 14:02:47.425684  752750 ssh_runner.go:195] Run: systemctl --version
I1006 14:02:47.425718  752750 main.go:141] libmachine: (functional-561811) Calling .GetSSHHostname
I1006 14:02:47.429076  752750 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:47.429582  752750 main.go:141] libmachine: (functional-561811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:42:d5", ip: ""} in network mk-functional-561811: {Iface:virbr1 ExpiryTime:2025-10-06 15:00:02 +0000 UTC Type:0 Mac:52:54:00:c9:42:d5 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-561811 Clientid:01:52:54:00:c9:42:d5}
I1006 14:02:47.429627  752750 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined IP address 192.168.39.208 and MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:47.429800  752750 main.go:141] libmachine: (functional-561811) Calling .GetSSHPort
I1006 14:02:47.429979  752750 main.go:141] libmachine: (functional-561811) Calling .GetSSHKeyPath
I1006 14:02:47.430175  752750 main.go:141] libmachine: (functional-561811) Calling .GetSSHUsername
I1006 14:02:47.430395  752750 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/functional-561811/id_rsa Username:docker}
I1006 14:02:47.509011  752750 ssh_runner.go:195] Run: sudo crictl images --output json
I1006 14:02:47.559179  752750 main.go:141] libmachine: Making call to close driver server
I1006 14:02:47.559205  752750 main.go:141] libmachine: (functional-561811) Calling .Close
I1006 14:02:47.559497  752750 main.go:141] libmachine: Successfully made call to close driver server
I1006 14:02:47.559518  752750 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 14:02:47.559527  752750 main.go:141] libmachine: Making call to close driver server
I1006 14:02:47.559560  752750 main.go:141] libmachine: (functional-561811) DBG | Closing plugin on server side
I1006 14:02:47.559606  752750 main.go:141] libmachine: (functional-561811) Calling .Close
I1006 14:02:47.559881  752750 main.go:141] libmachine: Successfully made call to close driver server
I1006 14:02:47.559898  752750 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 14:02:47.559883  752750 main.go:141] libmachine: (functional-561811) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-561811 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-561811"],"size":"4943877"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b4610899694
49f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e217
8d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f35609
2ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"e7ce735b412a5dafa15c44d97f3d69b602c2933ef336a0e8c4a3a4a84592ee52","repoDigests":["docker.io/library/fc6a0ecc411c94ee6c115677a66626f30502dd4015ea1ef2c51d4227e970e104-tmp@sha256:99772c91e6bc0634056d960905cf46c35c0ceda1ff06b406de5f14b75e6d39dc"],"repoTags":[],"size":"1466018"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-mini
kube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245
c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"203ad09fc1566a329c1d2af8d1f219b28fd2c00b69e743bd572b7f662365432d","repoDigests":["docker.io/library/nginx@sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c","docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc"],"repoTags":["docker.io/library/nginx:latest"],"size":"196550530"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags"
:["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"4f0f1e5c6bbe739e34d24ddee211da1418c2cf85f2bea8741a039efc3b9940ae","repoDigests":["localhost/minikube-local-cache-test@sha256:26e3be159041613dc2da3eb61229f1fd6b68b1fe7f49a3c36a0ccdd8a1618697"],"repoTags":["localhost/minikube-local-cache-test:functional-561811"],"size":"3330"},{"id":"a85581bb39753051d1ed75f36ff812a773bc5ed362fba9886e19b1c47b3eb05f","repoDigests":["localhost/my-image@sha256:1976d31599fb0b240b94399f2efb8dfd4e6138cce3c7f269ac87a345cdd79686"],"repoTags":["localhost/my-image:functional-561811"],"size":"1468600"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d8
7648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-561811 image ls --format json --alsologtostderr:
I1006 14:02:47.165962  752726 out.go:360] Setting OutFile to fd 1 ...
I1006 14:02:47.166250  752726 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:47.166258  752726 out.go:374] Setting ErrFile to fd 2...
I1006 14:02:47.166263  752726 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:47.166453  752726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
I1006 14:02:47.167043  752726 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:02:47.167163  752726 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:02:47.167529  752726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:47.167579  752726 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:47.182186  752726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
I1006 14:02:47.182806  752726 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:47.183483  752726 main.go:141] libmachine: Using API Version  1
I1006 14:02:47.183512  752726 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:47.183959  752726 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:47.184204  752726 main.go:141] libmachine: (functional-561811) Calling .GetState
I1006 14:02:47.186531  752726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:47.186610  752726 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:47.200555  752726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
I1006 14:02:47.201095  752726 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:47.201615  752726 main.go:141] libmachine: Using API Version  1
I1006 14:02:47.201662  752726 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:47.202046  752726 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:47.202255  752726 main.go:141] libmachine: (functional-561811) Calling .DriverName
I1006 14:02:47.202441  752726 ssh_runner.go:195] Run: systemctl --version
I1006 14:02:47.202473  752726 main.go:141] libmachine: (functional-561811) Calling .GetSSHHostname
I1006 14:02:47.205690  752726 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:47.206182  752726 main.go:141] libmachine: (functional-561811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:42:d5", ip: ""} in network mk-functional-561811: {Iface:virbr1 ExpiryTime:2025-10-06 15:00:02 +0000 UTC Type:0 Mac:52:54:00:c9:42:d5 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-561811 Clientid:01:52:54:00:c9:42:d5}
I1006 14:02:47.206229  752726 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined IP address 192.168.39.208 and MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:47.206378  752726 main.go:141] libmachine: (functional-561811) Calling .GetSSHPort
I1006 14:02:47.206549  752726 main.go:141] libmachine: (functional-561811) Calling .GetSSHKeyPath
I1006 14:02:47.206707  752726 main.go:141] libmachine: (functional-561811) Calling .GetSSHUsername
I1006 14:02:47.206872  752726 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/functional-561811/id_rsa Username:docker}
I1006 14:02:47.286852  752726 ssh_runner.go:195] Run: sudo crictl images --output json
I1006 14:02:47.333400  752726 main.go:141] libmachine: Making call to close driver server
I1006 14:02:47.333417  752726 main.go:141] libmachine: (functional-561811) Calling .Close
I1006 14:02:47.333777  752726 main.go:141] libmachine: Successfully made call to close driver server
I1006 14:02:47.333798  752726 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 14:02:47.333806  752726 main.go:141] libmachine: Making call to close driver server
I1006 14:02:47.333810  752726 main.go:141] libmachine: (functional-561811) DBG | Closing plugin on server side
I1006 14:02:47.333814  752726 main.go:141] libmachine: (functional-561811) Calling .Close
I1006 14:02:47.334174  752726 main.go:141] libmachine: Successfully made call to close driver server
I1006 14:02:47.334188  752726 main.go:141] libmachine: (functional-561811) DBG | Closing plugin on server side
I1006 14:02:47.334194  752726 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
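The stdout above is a flat JSON array of image records with `id`, `repoDigests`, `repoTags`, and `size` (bytes, encoded as a decimal string). A minimal decoding sketch matching the fields visible in this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strconv"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-561811",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	for _, img := range images {
		size, _ := strconv.ParseInt(img.Size, 10, 64)
		fmt.Printf("%s  %6.1fMB  tags=%v\n", img.ID[:13], float64(size)/1e6, img.RepoTags) // IDs are 64 hex chars
	}
}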

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-561811 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 203ad09fc1566a329c1d2af8d1f219b28fd2c00b69e743bd572b7f662365432d
repoDigests:
- docker.io/library/nginx@sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c
- docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc
repoTags:
- docker.io/library/nginx:latest
size: "196550530"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-561811
size: "4943877"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 4f0f1e5c6bbe739e34d24ddee211da1418c2cf85f2bea8741a039efc3b9940ae
repoDigests:
- localhost/minikube-local-cache-test@sha256:26e3be159041613dc2da3eb61229f1fd6b68b1fe7f49a3c36a0ccdd8a1618697
repoTags:
- localhost/minikube-local-cache-test:functional-561811
size: "3330"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-561811 image ls --format yaml --alsologtostderr:
I1006 14:02:43.532760  752550 out.go:360] Setting OutFile to fd 1 ...
I1006 14:02:43.532988  752550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:43.532996  752550 out.go:374] Setting ErrFile to fd 2...
I1006 14:02:43.533000  752550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:43.533207  752550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
I1006 14:02:43.533847  752550 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:02:43.533934  752550 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:02:43.534295  752550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:43.534359  752550 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:43.547872  752550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41361
I1006 14:02:43.548402  752550 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:43.549014  752550 main.go:141] libmachine: Using API Version  1
I1006 14:02:43.549061  752550 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:43.549525  752550 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:43.549833  752550 main.go:141] libmachine: (functional-561811) Calling .GetState
I1006 14:02:43.552389  752550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:43.552436  752550 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:43.566826  752550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34021
I1006 14:02:43.567344  752550 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:43.567909  752550 main.go:141] libmachine: Using API Version  1
I1006 14:02:43.567932  752550 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:43.568433  752550 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:43.568716  752550 main.go:141] libmachine: (functional-561811) Calling .DriverName
I1006 14:02:43.568998  752550 ssh_runner.go:195] Run: systemctl --version
I1006 14:02:43.569039  752550 main.go:141] libmachine: (functional-561811) Calling .GetSSHHostname
I1006 14:02:43.572180  752550 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:43.572554  752550 main.go:141] libmachine: (functional-561811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:42:d5", ip: ""} in network mk-functional-561811: {Iface:virbr1 ExpiryTime:2025-10-06 15:00:02 +0000 UTC Type:0 Mac:52:54:00:c9:42:d5 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-561811 Clientid:01:52:54:00:c9:42:d5}
I1006 14:02:43.572604  752550 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined IP address 192.168.39.208 and MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:43.572797  752550 main.go:141] libmachine: (functional-561811) Calling .GetSSHPort
I1006 14:02:43.573021  752550 main.go:141] libmachine: (functional-561811) Calling .GetSSHKeyPath
I1006 14:02:43.573191  752550 main.go:141] libmachine: (functional-561811) Calling .GetSSHUsername
I1006 14:02:43.573402  752550 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/functional-561811/id_rsa Username:docker}
I1006 14:02:43.661034  752550 ssh_runner.go:195] Run: sudo crictl images --output json
I1006 14:02:43.710413  752550 main.go:141] libmachine: Making call to close driver server
I1006 14:02:43.710427  752550 main.go:141] libmachine: (functional-561811) Calling .Close
I1006 14:02:43.710773  752550 main.go:141] libmachine: Successfully made call to close driver server
I1006 14:02:43.710795  752550 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 14:02:43.710805  752550 main.go:141] libmachine: Making call to close driver server
I1006 14:02:43.710813  752550 main.go:141] libmachine: (functional-561811) Calling .Close
I1006 14:02:43.711075  752550 main.go:141] libmachine: Successfully made call to close driver server
I1006 14:02:43.711093  752550 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 14:02:43.711114  752550 main.go:141] libmachine: (functional-561811) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh pgrep buildkitd: exit status 1 (201.246883ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image build -t localhost/my-image:functional-561811 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 image build -t localhost/my-image:functional-561811 testdata/build --alsologtostderr: (2.969914088s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-561811 image build -t localhost/my-image:functional-561811 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e7ce735b412
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-561811
--> a85581bb397
Successfully tagged localhost/my-image:functional-561811
a85581bb39753051d1ed75f36ff812a773bc5ed362fba9886e19b1c47b3eb05f
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-561811 image build -t localhost/my-image:functional-561811 testdata/build --alsologtostderr:
I1006 14:02:43.971551  752648 out.go:360] Setting OutFile to fd 1 ...
I1006 14:02:43.971860  752648 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:43.971872  752648 out.go:374] Setting ErrFile to fd 2...
I1006 14:02:43.971876  752648 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:02:43.972193  752648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
I1006 14:02:43.972874  752648 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:02:43.973756  752648 config.go:182] Loaded profile config "functional-561811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:02:43.974110  752648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:43.974151  752648 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:43.988380  752648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
I1006 14:02:43.989017  752648 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:43.989672  752648 main.go:141] libmachine: Using API Version  1
I1006 14:02:43.989697  752648 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:43.990146  752648 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:43.990418  752648 main.go:141] libmachine: (functional-561811) Calling .GetState
I1006 14:02:43.992524  752648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1006 14:02:43.992600  752648 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 14:02:44.006897  752648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40909
I1006 14:02:44.007478  752648 main.go:141] libmachine: () Calling .GetVersion
I1006 14:02:44.008001  752648 main.go:141] libmachine: Using API Version  1
I1006 14:02:44.008053  752648 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 14:02:44.008445  752648 main.go:141] libmachine: () Calling .GetMachineName
I1006 14:02:44.008694  752648 main.go:141] libmachine: (functional-561811) Calling .DriverName
I1006 14:02:44.008921  752648 ssh_runner.go:195] Run: systemctl --version
I1006 14:02:44.008947  752648 main.go:141] libmachine: (functional-561811) Calling .GetSSHHostname
I1006 14:02:44.012429  752648 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:44.012916  752648 main.go:141] libmachine: (functional-561811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:42:d5", ip: ""} in network mk-functional-561811: {Iface:virbr1 ExpiryTime:2025-10-06 15:00:02 +0000 UTC Type:0 Mac:52:54:00:c9:42:d5 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-561811 Clientid:01:52:54:00:c9:42:d5}
I1006 14:02:44.012950  752648 main.go:141] libmachine: (functional-561811) DBG | domain functional-561811 has defined IP address 192.168.39.208 and MAC address 52:54:00:c9:42:d5 in network mk-functional-561811
I1006 14:02:44.013145  752648 main.go:141] libmachine: (functional-561811) Calling .GetSSHPort
I1006 14:02:44.013430  752648 main.go:141] libmachine: (functional-561811) Calling .GetSSHKeyPath
I1006 14:02:44.013677  752648 main.go:141] libmachine: (functional-561811) Calling .GetSSHUsername
I1006 14:02:44.013877  752648 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/functional-561811/id_rsa Username:docker}
I1006 14:02:44.094568  752648 build_images.go:161] Building image from path: /tmp/build.3415971719.tar
I1006 14:02:44.094684  752648 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1006 14:02:44.109228  752648 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3415971719.tar
I1006 14:02:44.116299  752648 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3415971719.tar: stat -c "%s %y" /var/lib/minikube/build/build.3415971719.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3415971719.tar': No such file or directory
I1006 14:02:44.116365  752648 ssh_runner.go:362] scp /tmp/build.3415971719.tar --> /var/lib/minikube/build/build.3415971719.tar (3072 bytes)
I1006 14:02:44.152280  752648 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3415971719
I1006 14:02:44.166284  752648 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3415971719 -xf /var/lib/minikube/build/build.3415971719.tar
I1006 14:02:44.179310  752648 crio.go:315] Building image: /var/lib/minikube/build/build.3415971719
I1006 14:02:44.179407  752648 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-561811 /var/lib/minikube/build/build.3415971719 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1006 14:02:46.850327  752648 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-561811 /var/lib/minikube/build/build.3415971719 --cgroup-manager=cgroupfs: (2.670885928s)
I1006 14:02:46.850395  752648 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3415971719
I1006 14:02:46.867418  752648 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3415971719.tar
I1006 14:02:46.882122  752648 build_images.go:217] Built localhost/my-image:functional-561811 from /tmp/build.3415971719.tar
I1006 14:02:46.882177  752648 build_images.go:133] succeeded building to: functional-561811
I1006 14:02:46.882184  752648 build_images.go:134] failed building to: 
I1006 14:02:46.882259  752648 main.go:141] libmachine: Making call to close driver server
I1006 14:02:46.882290  752648 main.go:141] libmachine: (functional-561811) Calling .Close
I1006 14:02:46.882606  752648 main.go:141] libmachine: Successfully made call to close driver server
I1006 14:02:46.882630  752648 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 14:02:46.882637  752648 main.go:141] libmachine: Making call to close driver server
I1006 14:02:46.882644  752648 main.go:141] libmachine: (functional-561811) Calling .Close
I1006 14:02:46.882908  752648 main.go:141] libmachine: Successfully made call to close driver server
I1006 14:02:46.882922  752648 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 14:02:46.882946  752648 main.go:141] libmachine: (functional-561811) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.40s)
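
Note on the flow above: on the crio runtime, "image build" tars the build context on the host, existence-checks and copies the tar to /var/lib/minikube/build inside the guest, unpacks it, and runs podman build with the cgroupfs manager. A rough manual equivalent, assembled from this run's log lines (the context directory name "ctx" and the host-side tar path are placeholders; the IP and key path are this profile's):

    # package a build context on the host
    tar -cf /tmp/build.tar -C ./my-context .
    # copy it into the guest over the profile's ssh key
    scp -i /home/jenkins/minikube-integration/21701-739942/.minikube/machines/functional-561811/id_rsa \
        /tmp/build.tar docker@192.168.39.208:/tmp/build.tar
    # unpack and build, mirroring the commands in the log
    out/minikube-linux-amd64 -p functional-561811 ssh "sudo mkdir -p /var/lib/minikube/build/ctx && \
        sudo tar -C /var/lib/minikube/build/ctx -xf /tmp/build.tar && \
        sudo podman build -t localhost/my-image:functional-561811 /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs"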

TestFunctional/parallel/ImageCommands/Setup (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-561811
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.51s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image load --daemon kicbase/echo-server:functional-561811 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 image load --daemon kicbase/echo-server:functional-561811 --alsologtostderr: (1.735418236s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.14s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image load --daemon kicbase/echo-server:functional-561811 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 image load --daemon kicbase/echo-server:functional-561811 --alsologtostderr: (1.017525255s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "402.422106ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "73.188339ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "549.499732ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.032451ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

TestFunctional/parallel/ServiceCmd/DeployApp (56.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-561811 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-561811 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-6pvmg" [1e73f449-e04e-47f1-bb4d-0608608ee572] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-6pvmg" [1e73f449-e04e-47f1-bb4d-0608608ee572] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 56.004053845s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (56.26s)
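
Note: DeployApp is plain kubectl against the profile's context; the ~56s is almost entirely the pod going Pending -> Running (presumably the echo-server image pull). The same steps by hand, waiting on the selector the test polls:

    kubectl --context functional-561811 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-561811 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-561811 get pods -l app=hello-node -w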

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-561811
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image load --daemon kicbase/echo-server:functional-561811 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image save kicbase/echo-server:functional-561811 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

TestFunctional/parallel/ImageCommands/ImageRemove (2.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image rm kicbase/echo-server:functional-561811 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 image rm kicbase/echo-server:functional-561811 --alsologtostderr: (2.575132906s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.83s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.298196737s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.57s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-561811
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 image save --daemon kicbase/echo-server:functional-561811 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 image save --daemon kicbase/echo-server:functional-561811 --alsologtostderr: (1.068252522s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-561811
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.11s)

TestFunctional/parallel/MountCmd/any-port (13.68s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdany-port1512625097/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759759344925300189" to /tmp/TestFunctionalparallelMountCmdany-port1512625097/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759759344925300189" to /tmp/TestFunctionalparallelMountCmdany-port1512625097/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759759344925300189" to /tmp/TestFunctionalparallelMountCmdany-port1512625097/001/test-1759759344925300189
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (197.892165ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1006 14:02:25.123518  743851 retry.go:31] will retry after 561.465125ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p"
I1006 14:02:25.809384  743851 detect.go:223] nested VM detected
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  6 14:02 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  6 14:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  6 14:02 test-1759759344925300189
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh cat /mount-9p/test-1759759344925300189
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-561811 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [354e1818-42ef-4030-9258-699189e50a17] Pending
helpers_test.go:352: "busybox-mount" [354e1818-42ef-4030-9258-699189e50a17] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [354e1818-42ef-4030-9258-699189e50a17] Running
helpers_test.go:352: "busybox-mount" [354e1818-42ef-4030-9258-699189e50a17] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [354e1818-42ef-4030-9258-699189e50a17] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.00421846s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-561811 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdany-port1512625097/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.68s)
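
Note: the sequence above is the generic recipe for verifying a 9p host mount from the guest; a condensed sketch (the host directory is a placeholder, the rest mirrors the log). The first findmnt returning exit status 1 and being retried is normal while the mount daemon is still coming up, as seen at the top of this test.

    out/minikube-linux-amd64 mount -p functional-561811 /tmp/mount-src:/mount-9p &   # keep running
    out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-561811 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-561811 ssh "sudo umount -f /mount-9p"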

TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup385836192/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup385836192/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup385836192/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T" /mount1: exit status 1 (225.209021ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1006 14:02:52.050893  743851 retry.go:31] will retry after 440.995742ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-561811 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup385836192/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup385836192/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-561811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup385836192/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)

TestFunctional/parallel/ServiceCmd/List (1.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 service list: (1.242229215s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.24s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-561811 service list -o json: (1.247073394s)
functional_test.go:1504: Took "1.247167393s" to run "out/minikube-linux-amd64 -p functional-561811 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.208:30946
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctional/parallel/ServiceCmd/Format (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-561811 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.208:30946
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)
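
Note: the HTTPS/Format/URL subtests only check that an endpoint gets printed; nothing in them probes it. To confirm the NodePort actually answers (assumption: echo-server replies to plain HTTP on that port):

    URL=$(out/minikube-linux-amd64 -p functional-561811 service hello-node --url)
    curl -s "$URL"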

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-561811
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-561811
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-561811
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (223.83s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1006 14:03:16.639850  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m43.054250538s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (223.83s)
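
Note: --ha provisions three control-plane nodes (ha-825024, -m02, -m03; a worker -m04 is added later) behind the shared API endpoint 192.168.39.254:8443 that the healthz checks further down probe. Quick post-start sanity checks:

    out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-825024 kubectl -- get nodes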

TestMultiControlPlane/serial/DeployApp (7.49s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 kubectl -- rollout status deployment/busybox: (5.119948257s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6cdgq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6rx6n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-vgtwr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6cdgq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6rx6n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-vgtwr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6cdgq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6rx6n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-vgtwr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.49s)

TestMultiControlPlane/serial/PingHostFromPods (1.34s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6cdgq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6cdgq -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6rx6n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6rx6n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-vgtwr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-vgtwr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)
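
Note: the awk 'NR==5' | cut -d' ' -f3 pipeline extracts the resolved address from busybox nslookup output (the fifth line is apparently where the answer lands in this image), and the follow-up ping from the same pod confirms pod-to-host reachability via the libvirt gateway 192.168.39.1. By hand against one of the pods:

    out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6cdgq -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 -p ha-825024 kubectl -- exec busybox-7b57f96db7-6cdgq -- \
        sh -c "ping -c 1 192.168.39.1"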

TestMultiControlPlane/serial/AddWorkerNode (43.83s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 node add --alsologtostderr -v 5
E1006 14:07:03.586537  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:03.593006  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:03.604448  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:03.625985  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:03.667512  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:03.748982  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:03.910584  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:04.232150  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:04.874055  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:06.156263  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:08.717776  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:13.840176  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:07:24.081656  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 node add --alsologtostderr -v 5: (42.919917945s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.83s)

TestMultiControlPlane/serial/NodeLabels (0.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-825024 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1006 14:07:44.563423  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

TestMultiControlPlane/serial/CopyFile (13.85s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp testdata/cp-test.txt ha-825024:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1782517435/001/cp-test_ha-825024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024:/home/docker/cp-test.txt ha-825024-m02:/home/docker/cp-test_ha-825024_ha-825024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m02 "sudo cat /home/docker/cp-test_ha-825024_ha-825024-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024:/home/docker/cp-test.txt ha-825024-m03:/home/docker/cp-test_ha-825024_ha-825024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m03 "sudo cat /home/docker/cp-test_ha-825024_ha-825024-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024:/home/docker/cp-test.txt ha-825024-m04:/home/docker/cp-test_ha-825024_ha-825024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m04 "sudo cat /home/docker/cp-test_ha-825024_ha-825024-m04.txt"
E1006 14:07:48.923136  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp testdata/cp-test.txt ha-825024-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1782517435/001/cp-test_ha-825024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m02:/home/docker/cp-test.txt ha-825024:/home/docker/cp-test_ha-825024-m02_ha-825024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024 "sudo cat /home/docker/cp-test_ha-825024-m02_ha-825024.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m02:/home/docker/cp-test.txt ha-825024-m03:/home/docker/cp-test_ha-825024-m02_ha-825024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m03 "sudo cat /home/docker/cp-test_ha-825024-m02_ha-825024-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m02:/home/docker/cp-test.txt ha-825024-m04:/home/docker/cp-test_ha-825024-m02_ha-825024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m04 "sudo cat /home/docker/cp-test_ha-825024-m02_ha-825024-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp testdata/cp-test.txt ha-825024-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1782517435/001/cp-test_ha-825024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m03:/home/docker/cp-test.txt ha-825024:/home/docker/cp-test_ha-825024-m03_ha-825024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024 "sudo cat /home/docker/cp-test_ha-825024-m03_ha-825024.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m03:/home/docker/cp-test.txt ha-825024-m02:/home/docker/cp-test_ha-825024-m03_ha-825024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m02 "sudo cat /home/docker/cp-test_ha-825024-m03_ha-825024-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m03:/home/docker/cp-test.txt ha-825024-m04:/home/docker/cp-test_ha-825024-m03_ha-825024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m04 "sudo cat /home/docker/cp-test_ha-825024-m03_ha-825024-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp testdata/cp-test.txt ha-825024-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1782517435/001/cp-test_ha-825024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m04:/home/docker/cp-test.txt ha-825024:/home/docker/cp-test_ha-825024-m04_ha-825024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024 "sudo cat /home/docker/cp-test_ha-825024-m04_ha-825024.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m04:/home/docker/cp-test.txt ha-825024-m02:/home/docker/cp-test_ha-825024-m04_ha-825024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m02 "sudo cat /home/docker/cp-test_ha-825024-m04_ha-825024-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 cp ha-825024-m04:/home/docker/cp-test.txt ha-825024-m03:/home/docker/cp-test_ha-825024-m04_ha-825024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m03 "sudo cat /home/docker/cp-test_ha-825024-m04_ha-825024-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.85s)
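
Note: CopyFile walks the full (source, destination) matrix across all four nodes; each pair is one cp plus an ssh cat to verify the payload landed, e.g.:

    out/minikube-linux-amd64 -p ha-825024 cp testdata/cp-test.txt ha-825024-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-825024 ssh -n ha-825024-m02 "sudo cat /home/docker/cp-test.txt"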

TestMultiControlPlane/serial/StopSecondaryNode (81.09s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 node stop m02 --alsologtostderr -v 5
E1006 14:08:25.525773  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 node stop m02 --alsologtostderr -v 5: (1m20.381776978s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5: exit status 7 (708.752761ms)
-- stdout --
	ha-825024
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-825024-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-825024-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-825024-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1006 14:09:19.220341  757833 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:09:19.220476  757833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:09:19.220485  757833 out.go:374] Setting ErrFile to fd 2...
	I1006 14:09:19.220491  757833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:09:19.220740  757833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:09:19.220961  757833 out.go:368] Setting JSON to false
	I1006 14:09:19.221026  757833 mustload.go:65] Loading cluster: ha-825024
	I1006 14:09:19.221145  757833 notify.go:220] Checking for updates...
	I1006 14:09:19.221414  757833 config.go:182] Loaded profile config "ha-825024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:09:19.221429  757833 status.go:174] checking status of ha-825024 ...
	I1006 14:09:19.221876  757833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:09:19.221912  757833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:09:19.241333  757833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34711
	I1006 14:09:19.241971  757833 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:09:19.242789  757833 main.go:141] libmachine: Using API Version  1
	I1006 14:09:19.242831  757833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:09:19.243353  757833 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:09:19.243617  757833 main.go:141] libmachine: (ha-825024) Calling .GetState
	I1006 14:09:19.245987  757833 status.go:371] ha-825024 host status = "Running" (err=<nil>)
	I1006 14:09:19.246024  757833 host.go:66] Checking if "ha-825024" exists ...
	I1006 14:09:19.246468  757833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:09:19.246530  757833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:09:19.260637  757833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I1006 14:09:19.261278  757833 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:09:19.261911  757833 main.go:141] libmachine: Using API Version  1
	I1006 14:09:19.261934  757833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:09:19.262393  757833 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:09:19.262650  757833 main.go:141] libmachine: (ha-825024) Calling .GetIP
	I1006 14:09:19.266284  757833 main.go:141] libmachine: (ha-825024) DBG | domain ha-825024 has defined MAC address 52:54:00:9e:fd:dc in network mk-ha-825024
	I1006 14:09:19.266934  757833 main.go:141] libmachine: (ha-825024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fd:dc", ip: ""} in network mk-ha-825024: {Iface:virbr1 ExpiryTime:2025-10-06 15:03:22 +0000 UTC Type:0 Mac:52:54:00:9e:fd:dc Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-825024 Clientid:01:52:54:00:9e:fd:dc}
	I1006 14:09:19.266976  757833 main.go:141] libmachine: (ha-825024) DBG | domain ha-825024 has defined IP address 192.168.39.57 and MAC address 52:54:00:9e:fd:dc in network mk-ha-825024
	I1006 14:09:19.267146  757833 host.go:66] Checking if "ha-825024" exists ...
	I1006 14:09:19.267548  757833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:09:19.267607  757833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:09:19.283572  757833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I1006 14:09:19.284074  757833 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:09:19.284576  757833 main.go:141] libmachine: Using API Version  1
	I1006 14:09:19.284620  757833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:09:19.284983  757833 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:09:19.285268  757833 main.go:141] libmachine: (ha-825024) Calling .DriverName
	I1006 14:09:19.285504  757833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:09:19.285548  757833 main.go:141] libmachine: (ha-825024) Calling .GetSSHHostname
	I1006 14:09:19.288957  757833 main.go:141] libmachine: (ha-825024) DBG | domain ha-825024 has defined MAC address 52:54:00:9e:fd:dc in network mk-ha-825024
	I1006 14:09:19.289624  757833 main.go:141] libmachine: (ha-825024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fd:dc", ip: ""} in network mk-ha-825024: {Iface:virbr1 ExpiryTime:2025-10-06 15:03:22 +0000 UTC Type:0 Mac:52:54:00:9e:fd:dc Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-825024 Clientid:01:52:54:00:9e:fd:dc}
	I1006 14:09:19.289660  757833 main.go:141] libmachine: (ha-825024) DBG | domain ha-825024 has defined IP address 192.168.39.57 and MAC address 52:54:00:9e:fd:dc in network mk-ha-825024
	I1006 14:09:19.289871  757833 main.go:141] libmachine: (ha-825024) Calling .GetSSHPort
	I1006 14:09:19.290054  757833 main.go:141] libmachine: (ha-825024) Calling .GetSSHKeyPath
	I1006 14:09:19.290304  757833 main.go:141] libmachine: (ha-825024) Calling .GetSSHUsername
	I1006 14:09:19.290461  757833 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/ha-825024/id_rsa Username:docker}
	I1006 14:09:19.373223  757833 ssh_runner.go:195] Run: systemctl --version
	I1006 14:09:19.382265  757833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:09:19.403352  757833 kubeconfig.go:125] found "ha-825024" server: "https://192.168.39.254:8443"
	I1006 14:09:19.403393  757833 api_server.go:166] Checking apiserver status ...
	I1006 14:09:19.403448  757833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:09:19.427479  757833 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup
	W1006 14:09:19.444631  757833 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:09:19.444699  757833 ssh_runner.go:195] Run: ls
	I1006 14:09:19.452259  757833 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1006 14:09:19.458295  757833 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1006 14:09:19.458319  757833 status.go:463] ha-825024 apiserver status = Running (err=<nil>)
	I1006 14:09:19.458330  757833 status.go:176] ha-825024 status: &{Name:ha-825024 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 14:09:19.458348  757833 status.go:174] checking status of ha-825024-m02 ...
	I1006 14:09:19.458663  757833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:09:19.458709  757833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:09:19.473382  757833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40131
	I1006 14:09:19.473945  757833 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:09:19.474482  757833 main.go:141] libmachine: Using API Version  1
	I1006 14:09:19.474506  757833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:09:19.474931  757833 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:09:19.475124  757833 main.go:141] libmachine: (ha-825024-m02) Calling .GetState
	I1006 14:09:19.477163  757833 status.go:371] ha-825024-m02 host status = "Stopped" (err=<nil>)
	I1006 14:09:19.477184  757833 status.go:384] host is not running, skipping remaining checks
	I1006 14:09:19.477190  757833 status.go:176] ha-825024-m02 status: &{Name:ha-825024-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 14:09:19.477214  757833 status.go:174] checking status of ha-825024-m03 ...
	I1006 14:09:19.477674  757833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:09:19.477733  757833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:09:19.492302  757833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I1006 14:09:19.492914  757833 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:09:19.493565  757833 main.go:141] libmachine: Using API Version  1
	I1006 14:09:19.493604  757833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:09:19.493976  757833 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:09:19.494226  757833 main.go:141] libmachine: (ha-825024-m03) Calling .GetState
	I1006 14:09:19.496266  757833 status.go:371] ha-825024-m03 host status = "Running" (err=<nil>)
	I1006 14:09:19.496286  757833 host.go:66] Checking if "ha-825024-m03" exists ...
	I1006 14:09:19.496627  757833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:09:19.496679  757833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:09:19.511734  757833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I1006 14:09:19.512280  757833 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:09:19.512886  757833 main.go:141] libmachine: Using API Version  1
	I1006 14:09:19.512915  757833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:09:19.513356  757833 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:09:19.513602  757833 main.go:141] libmachine: (ha-825024-m03) Calling .GetIP
	I1006 14:09:19.517655  757833 main.go:141] libmachine: (ha-825024-m03) DBG | domain ha-825024-m03 has defined MAC address 52:54:00:e2:51:bd in network mk-ha-825024
	I1006 14:09:19.518188  757833 main.go:141] libmachine: (ha-825024-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:51:bd", ip: ""} in network mk-ha-825024: {Iface:virbr1 ExpiryTime:2025-10-06 15:05:32 +0000 UTC Type:0 Mac:52:54:00:e2:51:bd Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:ha-825024-m03 Clientid:01:52:54:00:e2:51:bd}
	I1006 14:09:19.518211  757833 main.go:141] libmachine: (ha-825024-m03) DBG | domain ha-825024-m03 has defined IP address 192.168.39.46 and MAC address 52:54:00:e2:51:bd in network mk-ha-825024
	I1006 14:09:19.518430  757833 host.go:66] Checking if "ha-825024-m03" exists ...
	I1006 14:09:19.518951  757833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:09:19.519001  757833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:09:19.533640  757833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I1006 14:09:19.534154  757833 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:09:19.534771  757833 main.go:141] libmachine: Using API Version  1
	I1006 14:09:19.534803  757833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:09:19.535294  757833 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:09:19.535553  757833 main.go:141] libmachine: (ha-825024-m03) Calling .DriverName
	I1006 14:09:19.535803  757833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:09:19.535830  757833 main.go:141] libmachine: (ha-825024-m03) Calling .GetSSHHostname
	I1006 14:09:19.539162  757833 main.go:141] libmachine: (ha-825024-m03) DBG | domain ha-825024-m03 has defined MAC address 52:54:00:e2:51:bd in network mk-ha-825024
	I1006 14:09:19.539744  757833 main.go:141] libmachine: (ha-825024-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:51:bd", ip: ""} in network mk-ha-825024: {Iface:virbr1 ExpiryTime:2025-10-06 15:05:32 +0000 UTC Type:0 Mac:52:54:00:e2:51:bd Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:ha-825024-m03 Clientid:01:52:54:00:e2:51:bd}
	I1006 14:09:19.539779  757833 main.go:141] libmachine: (ha-825024-m03) DBG | domain ha-825024-m03 has defined IP address 192.168.39.46 and MAC address 52:54:00:e2:51:bd in network mk-ha-825024
	I1006 14:09:19.540015  757833 main.go:141] libmachine: (ha-825024-m03) Calling .GetSSHPort
	I1006 14:09:19.540260  757833 main.go:141] libmachine: (ha-825024-m03) Calling .GetSSHKeyPath
	I1006 14:09:19.540412  757833 main.go:141] libmachine: (ha-825024-m03) Calling .GetSSHUsername
	I1006 14:09:19.540562  757833 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/ha-825024-m03/id_rsa Username:docker}
	I1006 14:09:19.631103  757833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:09:19.654138  757833 kubeconfig.go:125] found "ha-825024" server: "https://192.168.39.254:8443"
	I1006 14:09:19.654168  757833 api_server.go:166] Checking apiserver status ...
	I1006 14:09:19.654206  757833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:09:19.676089  757833 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1800/cgroup
	W1006 14:09:19.689340  757833 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1800/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:09:19.689400  757833 ssh_runner.go:195] Run: ls
	I1006 14:09:19.695313  757833 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1006 14:09:19.700991  757833 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1006 14:09:19.701040  757833 status.go:463] ha-825024-m03 apiserver status = Running (err=<nil>)
	I1006 14:09:19.701052  757833 status.go:176] ha-825024-m03 status: &{Name:ha-825024-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 14:09:19.701074  757833 status.go:174] checking status of ha-825024-m04 ...
	I1006 14:09:19.701400  757833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:09:19.701475  757833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:09:19.715326  757833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
	I1006 14:09:19.715824  757833 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:09:19.716337  757833 main.go:141] libmachine: Using API Version  1
	I1006 14:09:19.716367  757833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:09:19.716737  757833 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:09:19.716938  757833 main.go:141] libmachine: (ha-825024-m04) Calling .GetState
	I1006 14:09:19.718768  757833 status.go:371] ha-825024-m04 host status = "Running" (err=<nil>)
	I1006 14:09:19.718816  757833 host.go:66] Checking if "ha-825024-m04" exists ...
	I1006 14:09:19.719133  757833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:09:19.719198  757833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:09:19.732888  757833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40939
	I1006 14:09:19.733340  757833 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:09:19.734084  757833 main.go:141] libmachine: Using API Version  1
	I1006 14:09:19.734112  757833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:09:19.734500  757833 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:09:19.734771  757833 main.go:141] libmachine: (ha-825024-m04) Calling .GetIP
	I1006 14:09:19.737693  757833 main.go:141] libmachine: (ha-825024-m04) DBG | domain ha-825024-m04 has defined MAC address 52:54:00:53:67:18 in network mk-ha-825024
	I1006 14:09:19.738136  757833 main.go:141] libmachine: (ha-825024-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:67:18", ip: ""} in network mk-ha-825024: {Iface:virbr1 ExpiryTime:2025-10-06 15:07:16 +0000 UTC Type:0 Mac:52:54:00:53:67:18 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-825024-m04 Clientid:01:52:54:00:53:67:18}
	I1006 14:09:19.738183  757833 main.go:141] libmachine: (ha-825024-m04) DBG | domain ha-825024-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:53:67:18 in network mk-ha-825024
	I1006 14:09:19.738333  757833 host.go:66] Checking if "ha-825024-m04" exists ...
	I1006 14:09:19.738653  757833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:09:19.738699  757833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:09:19.752384  757833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I1006 14:09:19.752865  757833 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:09:19.753388  757833 main.go:141] libmachine: Using API Version  1
	I1006 14:09:19.753416  757833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:09:19.753811  757833 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:09:19.754028  757833 main.go:141] libmachine: (ha-825024-m04) Calling .DriverName
	I1006 14:09:19.754225  757833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:09:19.754247  757833 main.go:141] libmachine: (ha-825024-m04) Calling .GetSSHHostname
	I1006 14:09:19.757312  757833 main.go:141] libmachine: (ha-825024-m04) DBG | domain ha-825024-m04 has defined MAC address 52:54:00:53:67:18 in network mk-ha-825024
	I1006 14:09:19.757790  757833 main.go:141] libmachine: (ha-825024-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:67:18", ip: ""} in network mk-ha-825024: {Iface:virbr1 ExpiryTime:2025-10-06 15:07:16 +0000 UTC Type:0 Mac:52:54:00:53:67:18 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-825024-m04 Clientid:01:52:54:00:53:67:18}
	I1006 14:09:19.757822  757833 main.go:141] libmachine: (ha-825024-m04) DBG | domain ha-825024-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:53:67:18 in network mk-ha-825024
	I1006 14:09:19.757981  757833 main.go:141] libmachine: (ha-825024-m04) Calling .GetSSHPort
	I1006 14:09:19.758143  757833 main.go:141] libmachine: (ha-825024-m04) Calling .GetSSHKeyPath
	I1006 14:09:19.758336  757833 main.go:141] libmachine: (ha-825024-m04) Calling .GetSSHUsername
	I1006 14:09:19.758472  757833 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/ha-825024-m04/id_rsa Username:docker}
	I1006 14:09:19.851415  757833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:09:19.873031  757833 status.go:176] ha-825024-m04 status: &{Name:ha-825024-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (81.09s)
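
The stderr above shows the shape of a `minikube status` probe: libmachine reports the VM state, and for a running control-plane node the CLI then hits the apiserver's /healthz endpoint (here via the HA virtual IP 192.168.39.254:8443) and expects a 200 "ok". A minimal Go sketch of that final health check, with the endpoint taken from the log; the real client also presents certificates, which this sketch skips by disabling TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Probe the apiserver the same way the log above does: GET /healthz and
// treat a 200 "ok" body as healthy. minikube's real status code also
// presents client certs; TLS verification is skipped here purely for
// illustration.
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("apiserver returned %d: %s\n", resp.StatusCode, body)
}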

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (38.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 node start m02 --alsologtostderr -v 5
E1006 14:09:47.448165  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 node start m02 --alsologtostderr -v 5: (37.491017816s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5: (1.155783634s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.73s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.083214228s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (304.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 stop --alsologtostderr -v 5
E1006 14:12:03.589309  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:12:31.291173  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:12:48.927789  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 stop --alsologtostderr -v 5: (3m1.03362847s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 start --wait true --alsologtostderr -v 5
E1006 14:14:12.003796  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 start --wait true --alsologtostderr -v 5: (2m3.617763099s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (304.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 node delete m03 --alsologtostderr -v 5: (17.769970754s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.58s)
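
The Ready check above passes kubectl a go-template, which kubectl evaluates with Go's standard text/template package over the API response. A self-contained sketch that runs the same template against a mocked node list, so the output format can be inspected without a cluster (the mock data is illustrative, not captured from this run):

package main

import (
	"os"
	"text/template"
)

// The same template the test hands to kubectl, evaluated over a mocked
// structure shaped like `kubectl get nodes -o json` output: one line of
// " <status>" per node's Ready condition.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}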

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (256.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 stop --alsologtostderr -v 5
E1006 14:17:03.587037  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:17:48.927687  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 stop --alsologtostderr -v 5: (4m15.922301858s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5: exit status 7 (116.868731ms)

                                                
                                                
-- stdout --
	ha-825024
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-825024-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-825024-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 14:19:40.436155  761818 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:19:40.436459  761818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:19:40.436470  761818 out.go:374] Setting ErrFile to fd 2...
	I1006 14:19:40.436475  761818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:19:40.436701  761818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:19:40.436897  761818 out.go:368] Setting JSON to false
	I1006 14:19:40.436932  761818 mustload.go:65] Loading cluster: ha-825024
	I1006 14:19:40.437026  761818 notify.go:220] Checking for updates...
	I1006 14:19:40.437339  761818 config.go:182] Loaded profile config "ha-825024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:19:40.437355  761818 status.go:174] checking status of ha-825024 ...
	I1006 14:19:40.437833  761818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:19:40.437875  761818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:19:40.458595  761818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36089
	I1006 14:19:40.459327  761818 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:19:40.460106  761818 main.go:141] libmachine: Using API Version  1
	I1006 14:19:40.460142  761818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:19:40.460610  761818 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:19:40.460859  761818 main.go:141] libmachine: (ha-825024) Calling .GetState
	I1006 14:19:40.462953  761818 status.go:371] ha-825024 host status = "Stopped" (err=<nil>)
	I1006 14:19:40.462975  761818 status.go:384] host is not running, skipping remaining checks
	I1006 14:19:40.462983  761818 status.go:176] ha-825024 status: &{Name:ha-825024 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 14:19:40.463019  761818 status.go:174] checking status of ha-825024-m02 ...
	I1006 14:19:40.463481  761818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:19:40.463545  761818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:19:40.477562  761818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I1006 14:19:40.478167  761818 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:19:40.478699  761818 main.go:141] libmachine: Using API Version  1
	I1006 14:19:40.478721  761818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:19:40.479133  761818 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:19:40.479360  761818 main.go:141] libmachine: (ha-825024-m02) Calling .GetState
	I1006 14:19:40.481312  761818 status.go:371] ha-825024-m02 host status = "Stopped" (err=<nil>)
	I1006 14:19:40.481333  761818 status.go:384] host is not running, skipping remaining checks
	I1006 14:19:40.481341  761818 status.go:176] ha-825024-m02 status: &{Name:ha-825024-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 14:19:40.481373  761818 status.go:174] checking status of ha-825024-m04 ...
	I1006 14:19:40.481690  761818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:19:40.481731  761818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:19:40.496481  761818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40751
	I1006 14:19:40.496964  761818 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:19:40.497452  761818 main.go:141] libmachine: Using API Version  1
	I1006 14:19:40.497474  761818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:19:40.497860  761818 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:19:40.498101  761818 main.go:141] libmachine: (ha-825024-m04) Calling .GetState
	I1006 14:19:40.499992  761818 status.go:371] ha-825024-m04 host status = "Stopped" (err=<nil>)
	I1006 14:19:40.500018  761818 status.go:384] host is not running, skipping remaining checks
	I1006 14:19:40.500025  761818 status.go:176] ha-825024-m04 status: &{Name:ha-825024-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (256.04s)
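
Note the exit status 7 above: `minikube status` reports "some host is stopped" through its exit code rather than failing outright, so callers need to read the code instead of treating any non-zero exit as a command failure. A minimal Go sketch of that distinction (assumes a minikube binary on PATH; the profile name is taken from this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Run "minikube status" and surface its exit code. Exit status 7, as in
// the log above, means one or more hosts are stopped, not that the
// command itself failed.
func main() {
	out, err := exec.Command("minikube", "-p", "ha-825024", "status").CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}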

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (103.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m42.464241423s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (103.30s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (90.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 node add --control-plane --alsologtostderr -v 5
E1006 14:22:03.590152  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:22:48.922854  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-825024 node add --control-plane --alsologtostderr -v 5: (1m29.321797804s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-825024 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (90.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                    
x
+
TestJSONOutput/start/Command (59.66s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-739603 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1006 14:23:26.655041  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-739603 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (59.659210254s)
--- PASS: TestJSONOutput/start/Command (59.66s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.8s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-739603 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-739603 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.71s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.22s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-739603 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-739603 --output=json --user=testUser: (7.22169673s)
--- PASS: TestJSONOutput/stop/Command (7.22s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-417299 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-417299 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.773284ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1d641b5a-3885-40e3-b048-a1083cea0e15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-417299] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"286243f2-fd10-4ae0-a769-77b0de86cc77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21701"}}
	{"specversion":"1.0","id":"49c2dff1-6243-4731-b095-5e255f75933a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1bd3d980-ba61-4406-af48-806db983abec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig"}}
	{"specversion":"1.0","id":"313a0e00-b764-43eb-9716-5351ccd1a472","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube"}}
	{"specversion":"1.0","id":"f703f9a1-8972-423b-b092-dceed60cd250","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2051723d-e80d-4965-b447-1bb1d1b7de71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"678a7f0d-30d4-475f-ac0f-842543c278cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-417299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-417299
--- PASS: TestErrorJSONOutput (0.23s)
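
With --output=json, each line minikube prints is a CloudEvent like the ones captured in the stdout above. A minimal Go decoder for such a stream, using only the fields visible in this report (the struct is a sketch, not minikube's own type):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Each line of "minikube ... --output=json" is a CloudEvent like those
// captured above. Decode the fields the report shows and print the
// step or error messages.
type event struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
		TotalSteps  string `json:"totalsteps"`
		Message     string `json:"message"`
		ExitCode    string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe minikube's output in
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Printf("%s: %s\n", e.Type, e.Data.Message)
	}
}

Usage would be along the lines of `out/minikube-linux-amd64 start -p demo --output=json | go run decode.go` (profile and file names hypothetical).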

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (82.59s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-087821 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-087821 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.574268117s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-102367 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-102367 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.35901722s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-087821
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-102367
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-102367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-102367
helpers_test.go:175: Cleaning up "first-087821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-087821
--- PASS: TestMinikubeProfile (82.59s)
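
`profile list -ojson` is the machine-readable form the checks above rely on. A Go sketch that decodes it; the valid/invalid keys and the Name/Status fields match recent minikube releases but should be treated as an assumption rather than a stable API:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Decode `minikube profile list -o json`. Field names here are an
// assumption based on recent minikube releases, not a stable contract.
type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("minikube failed:", err)
		return
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, v := range p.Valid {
		fmt.Printf("%s: %s\n", v.Name, v.Status)
	}
}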

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (20.63s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-681106 --memory=3072 --mount-string /tmp/TestMountStartserial3311269045/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-681106 --memory=3072 --mount-string /tmp/TestMountStartserial3311269045/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.627796902s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-681106 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-681106 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
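
The verification above is simply `findmnt --json /minikube-host` run inside the guest over ssh. A Go sketch that issues the same command and decodes findmnt's JSON (binary path and profile name taken from this run; the struct covers only findmnt's standard fields):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Ask the guest for the mount record the way the test does, then decode
// findmnt's JSON. Field names follow findmnt --json output.
type findmnt struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-681106",
		"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	var f findmnt
	if err := json.Unmarshal(out, &f); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, fs := range f.Filesystems {
		fmt.Printf("%s on %s type %s (%s)\n", fs.Source, fs.Target, fs.FSType, fs.Options)
	}
}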

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (23.53s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-697739 --memory=3072 --mount-string /tmp/TestMountStartserial3311269045/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-697739 --memory=3072 --mount-string /tmp/TestMountStartserial3311269045/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.52860913s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.53s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-697739 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-697739 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.59s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-681106 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.59s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-697739 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-697739 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-697739
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-697739: (1.235407515s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (19.61s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-697739
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-697739: (18.608084479s)
--- PASS: TestMountStart/serial/RestartStopped (19.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-697739 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-697739 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (99.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-962847 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1006 14:27:03.586891  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:27:48.923500  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-962847 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m39.484100714s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.94s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-962847 -- rollout status deployment/busybox: (3.63038544s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- exec busybox-7b57f96db7-4jplh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- exec busybox-7b57f96db7-lqmgj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- exec busybox-7b57f96db7-4jplh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- exec busybox-7b57f96db7-lqmgj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- exec busybox-7b57f96db7-4jplh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- exec busybox-7b57f96db7-lqmgj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.24s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- exec busybox-7b57f96db7-4jplh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- exec busybox-7b57f96db7-4jplh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- exec busybox-7b57f96db7-lqmgj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962847 -- exec busybox-7b57f96db7-lqmgj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
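
The pipeline above leans on busybox nslookup's fixed output layout: the resolved host address is the third space-separated field of line 5. An equivalent extraction in Go, for readers who do not parse awk/cut on sight (the sample output is a mock of that layout, not captured from this run):

package main

import (
	"fmt"
	"strings"
)

// Mimic `nslookup ... | awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
// output and return its third field. Like cut, splitting on a single
// space means consecutive spaces produce empty fields.
func thirdFieldOfLine5(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Mocked busybox nslookup output; illustrative only.
	sample := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\nName:      host.minikube.internal\nAddress: 1 192.168.39.1\n"
	fmt.Println(thirdFieldOfLine5(sample)) // 192.168.39.1
}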

                                                
                                    
x
+
TestMultiNode/serial/AddNode (43.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-962847 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-962847 -v=5 --alsologtostderr: (42.738871697s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.37s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-962847 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp testdata/cp-test.txt multinode-962847:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp multinode-962847:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1775729096/001/cp-test_multinode-962847.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp multinode-962847:/home/docker/cp-test.txt multinode-962847-m02:/home/docker/cp-test_multinode-962847_multinode-962847-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m02 "sudo cat /home/docker/cp-test_multinode-962847_multinode-962847-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp multinode-962847:/home/docker/cp-test.txt multinode-962847-m03:/home/docker/cp-test_multinode-962847_multinode-962847-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m03 "sudo cat /home/docker/cp-test_multinode-962847_multinode-962847-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp testdata/cp-test.txt multinode-962847-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp multinode-962847-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1775729096/001/cp-test_multinode-962847-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp multinode-962847-m02:/home/docker/cp-test.txt multinode-962847:/home/docker/cp-test_multinode-962847-m02_multinode-962847.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847 "sudo cat /home/docker/cp-test_multinode-962847-m02_multinode-962847.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp multinode-962847-m02:/home/docker/cp-test.txt multinode-962847-m03:/home/docker/cp-test_multinode-962847-m02_multinode-962847-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m03 "sudo cat /home/docker/cp-test_multinode-962847-m02_multinode-962847-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp testdata/cp-test.txt multinode-962847-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp multinode-962847-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1775729096/001/cp-test_multinode-962847-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp multinode-962847-m03:/home/docker/cp-test.txt multinode-962847:/home/docker/cp-test_multinode-962847-m03_multinode-962847.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847 "sudo cat /home/docker/cp-test_multinode-962847-m03_multinode-962847.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 cp multinode-962847-m03:/home/docker/cp-test.txt multinode-962847-m02:/home/docker/cp-test_multinode-962847-m03_multinode-962847-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 ssh -n multinode-962847-m02 "sudo cat /home/docker/cp-test_multinode-962847-m03_multinode-962847-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.80s)
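
Each row of the matrix above is a `cp` onto a node followed by reading the file back with `ssh -n`. A minimal round-trip sketch in Go using the same binary path and profile as this run:

package main

import (
	"fmt"
	"os/exec"
)

// Round-trip one file the way the matrix above does: cp it onto a node,
// then cat it back over ssh.
func main() {
	p := "multinode-962847"
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", p,
		"cp", "testdata/cp-test.txt", p+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		fmt.Println("cp failed:", err, string(out))
		return
	}
	out, err := exec.Command("out/minikube-linux-amd64", "-p", p,
		"ssh", "-n", p, "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	fmt.Printf("copied back %d bytes\n", len(out))
}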

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-962847 node stop m03: (1.690863573s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-962847 status: exit status 7 (466.7818ms)

-- stdout --
	multinode-962847
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-962847-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-962847-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-962847 status --alsologtostderr: exit status 7 (460.817574ms)

-- stdout --
	multinode-962847
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-962847-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-962847-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1006 14:29:18.361413  769174 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:29:18.361664  769174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:29:18.361673  769174 out.go:374] Setting ErrFile to fd 2...
	I1006 14:29:18.361677  769174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:29:18.361894  769174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:29:18.362076  769174 out.go:368] Setting JSON to false
	I1006 14:29:18.362106  769174 mustload.go:65] Loading cluster: multinode-962847
	I1006 14:29:18.362294  769174 notify.go:220] Checking for updates...
	I1006 14:29:18.362505  769174 config.go:182] Loaded profile config "multinode-962847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:29:18.362520  769174 status.go:174] checking status of multinode-962847 ...
	I1006 14:29:18.363044  769174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:29:18.363082  769174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:29:18.381708  769174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34327
	I1006 14:29:18.382217  769174 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:29:18.382917  769174 main.go:141] libmachine: Using API Version  1
	I1006 14:29:18.382958  769174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:29:18.383330  769174 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:29:18.383576  769174 main.go:141] libmachine: (multinode-962847) Calling .GetState
	I1006 14:29:18.385662  769174 status.go:371] multinode-962847 host status = "Running" (err=<nil>)
	I1006 14:29:18.385685  769174 host.go:66] Checking if "multinode-962847" exists ...
	I1006 14:29:18.386069  769174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:29:18.386116  769174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:29:18.400723  769174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I1006 14:29:18.401192  769174 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:29:18.401681  769174 main.go:141] libmachine: Using API Version  1
	I1006 14:29:18.401707  769174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:29:18.402137  769174 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:29:18.402402  769174 main.go:141] libmachine: (multinode-962847) Calling .GetIP
	I1006 14:29:18.405723  769174 main.go:141] libmachine: (multinode-962847) DBG | domain multinode-962847 has defined MAC address 52:54:00:93:16:45 in network mk-multinode-962847
	I1006 14:29:18.406360  769174 main.go:141] libmachine: (multinode-962847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:16:45", ip: ""} in network mk-multinode-962847: {Iface:virbr1 ExpiryTime:2025-10-06 15:26:53 +0000 UTC Type:0 Mac:52:54:00:93:16:45 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:multinode-962847 Clientid:01:52:54:00:93:16:45}
	I1006 14:29:18.406386  769174 main.go:141] libmachine: (multinode-962847) DBG | domain multinode-962847 has defined IP address 192.168.39.242 and MAC address 52:54:00:93:16:45 in network mk-multinode-962847
	I1006 14:29:18.406562  769174 host.go:66] Checking if "multinode-962847" exists ...
	I1006 14:29:18.406878  769174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:29:18.406928  769174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:29:18.421113  769174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35191
	I1006 14:29:18.421631  769174 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:29:18.422142  769174 main.go:141] libmachine: Using API Version  1
	I1006 14:29:18.422166  769174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:29:18.422543  769174 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:29:18.422798  769174 main.go:141] libmachine: (multinode-962847) Calling .DriverName
	I1006 14:29:18.423022  769174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:29:18.423068  769174 main.go:141] libmachine: (multinode-962847) Calling .GetSSHHostname
	I1006 14:29:18.426506  769174 main.go:141] libmachine: (multinode-962847) DBG | domain multinode-962847 has defined MAC address 52:54:00:93:16:45 in network mk-multinode-962847
	I1006 14:29:18.426973  769174 main.go:141] libmachine: (multinode-962847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:16:45", ip: ""} in network mk-multinode-962847: {Iface:virbr1 ExpiryTime:2025-10-06 15:26:53 +0000 UTC Type:0 Mac:52:54:00:93:16:45 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:multinode-962847 Clientid:01:52:54:00:93:16:45}
	I1006 14:29:18.427021  769174 main.go:141] libmachine: (multinode-962847) DBG | domain multinode-962847 has defined IP address 192.168.39.242 and MAC address 52:54:00:93:16:45 in network mk-multinode-962847
	I1006 14:29:18.427130  769174 main.go:141] libmachine: (multinode-962847) Calling .GetSSHPort
	I1006 14:29:18.427295  769174 main.go:141] libmachine: (multinode-962847) Calling .GetSSHKeyPath
	I1006 14:29:18.427499  769174 main.go:141] libmachine: (multinode-962847) Calling .GetSSHUsername
	I1006 14:29:18.427641  769174 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/multinode-962847/id_rsa Username:docker}
	I1006 14:29:18.514346  769174 ssh_runner.go:195] Run: systemctl --version
	I1006 14:29:18.522972  769174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:29:18.543491  769174 kubeconfig.go:125] found "multinode-962847" server: "https://192.168.39.242:8443"
	I1006 14:29:18.543538  769174 api_server.go:166] Checking apiserver status ...
	I1006 14:29:18.543617  769174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:18.564996  769174 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W1006 14:29:18.577451  769174 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:29:18.577523  769174 ssh_runner.go:195] Run: ls
	I1006 14:29:18.582864  769174 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I1006 14:29:18.587977  769174 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I1006 14:29:18.588012  769174 status.go:463] multinode-962847 apiserver status = Running (err=<nil>)
	I1006 14:29:18.588028  769174 status.go:176] multinode-962847 status: &{Name:multinode-962847 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 14:29:18.588051  769174 status.go:174] checking status of multinode-962847-m02 ...
	I1006 14:29:18.588474  769174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:29:18.588537  769174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:29:18.602868  769174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I1006 14:29:18.603524  769174 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:29:18.604057  769174 main.go:141] libmachine: Using API Version  1
	I1006 14:29:18.604078  769174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:29:18.604422  769174 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:29:18.604664  769174 main.go:141] libmachine: (multinode-962847-m02) Calling .GetState
	I1006 14:29:18.606311  769174 status.go:371] multinode-962847-m02 host status = "Running" (err=<nil>)
	I1006 14:29:18.606332  769174 host.go:66] Checking if "multinode-962847-m02" exists ...
	I1006 14:29:18.606669  769174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:29:18.606730  769174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:29:18.620766  769174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41349
	I1006 14:29:18.621251  769174 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:29:18.621764  769174 main.go:141] libmachine: Using API Version  1
	I1006 14:29:18.621786  769174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:29:18.622144  769174 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:29:18.622417  769174 main.go:141] libmachine: (multinode-962847-m02) Calling .GetIP
	I1006 14:29:18.626254  769174 main.go:141] libmachine: (multinode-962847-m02) DBG | domain multinode-962847-m02 has defined MAC address 52:54:00:bd:34:e2 in network mk-multinode-962847
	I1006 14:29:18.626800  769174 main.go:141] libmachine: (multinode-962847-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:34:e2", ip: ""} in network mk-multinode-962847: {Iface:virbr1 ExpiryTime:2025-10-06 15:27:49 +0000 UTC Type:0 Mac:52:54:00:bd:34:e2 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-962847-m02 Clientid:01:52:54:00:bd:34:e2}
	I1006 14:29:18.626833  769174 main.go:141] libmachine: (multinode-962847-m02) DBG | domain multinode-962847-m02 has defined IP address 192.168.39.123 and MAC address 52:54:00:bd:34:e2 in network mk-multinode-962847
	I1006 14:29:18.627028  769174 host.go:66] Checking if "multinode-962847-m02" exists ...
	I1006 14:29:18.627379  769174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:29:18.627429  769174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:29:18.641849  769174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I1006 14:29:18.642356  769174 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:29:18.642877  769174 main.go:141] libmachine: Using API Version  1
	I1006 14:29:18.642915  769174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:29:18.643442  769174 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:29:18.643690  769174 main.go:141] libmachine: (multinode-962847-m02) Calling .DriverName
	I1006 14:29:18.643934  769174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:29:18.643958  769174 main.go:141] libmachine: (multinode-962847-m02) Calling .GetSSHHostname
	I1006 14:29:18.647818  769174 main.go:141] libmachine: (multinode-962847-m02) DBG | domain multinode-962847-m02 has defined MAC address 52:54:00:bd:34:e2 in network mk-multinode-962847
	I1006 14:29:18.648318  769174 main.go:141] libmachine: (multinode-962847-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:34:e2", ip: ""} in network mk-multinode-962847: {Iface:virbr1 ExpiryTime:2025-10-06 15:27:49 +0000 UTC Type:0 Mac:52:54:00:bd:34:e2 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-962847-m02 Clientid:01:52:54:00:bd:34:e2}
	I1006 14:29:18.648376  769174 main.go:141] libmachine: (multinode-962847-m02) DBG | domain multinode-962847-m02 has defined IP address 192.168.39.123 and MAC address 52:54:00:bd:34:e2 in network mk-multinode-962847
	I1006 14:29:18.648569  769174 main.go:141] libmachine: (multinode-962847-m02) Calling .GetSSHPort
	I1006 14:29:18.648864  769174 main.go:141] libmachine: (multinode-962847-m02) Calling .GetSSHKeyPath
	I1006 14:29:18.649100  769174 main.go:141] libmachine: (multinode-962847-m02) Calling .GetSSHUsername
	I1006 14:29:18.649282  769174 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21701-739942/.minikube/machines/multinode-962847-m02/id_rsa Username:docker}
	I1006 14:29:18.734160  769174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:29:18.751153  769174 status.go:176] multinode-962847-m02 status: &{Name:multinode-962847-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1006 14:29:18.751191  769174 status.go:174] checking status of multinode-962847-m03 ...
	I1006 14:29:18.751528  769174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:29:18.751573  769174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:29:18.766889  769174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I1006 14:29:18.767410  769174 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:29:18.767844  769174 main.go:141] libmachine: Using API Version  1
	I1006 14:29:18.767867  769174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:29:18.768311  769174 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:29:18.768508  769174 main.go:141] libmachine: (multinode-962847-m03) Calling .GetState
	I1006 14:29:18.770483  769174 status.go:371] multinode-962847-m03 host status = "Stopped" (err=<nil>)
	I1006 14:29:18.770501  769174 status.go:384] host is not running, skipping remaining checks
	I1006 14:29:18.770509  769174 status.go:176] multinode-962847-m03 status: &{Name:multinode-962847-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.62s)
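
The two status invocations above exit non-zero (status 7) because one node is stopped; a caller can branch on that exit code instead of parsing the table. A minimal sketch in Go of doing that, assuming the binary path and profile name shown in this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken from the log above; adjust as needed.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-962847", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above, status exits 7 while multinode-962847-m03 is stopped.
		fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
	}
}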

TestMultiNode/serial/StartAfterStop (37.2s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-962847 node start m03 -v=5 --alsologtostderr: (36.536504766s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.20s)

TestMultiNode/serial/RestartKeepsNodes (289.92s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-962847
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-962847
E1006 14:30:52.007959  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:32:03.590908  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-962847: (2m46.588709478s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-962847 --wait=true -v=5 --alsologtostderr
E1006 14:32:48.927731  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-962847 --wait=true -v=5 --alsologtostderr: (2m3.217681571s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-962847
--- PASS: TestMultiNode/serial/RestartKeepsNodes (289.92s)

TestMultiNode/serial/DeleteNode (2.8s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-962847 node delete m03: (2.231665579s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.80s)
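
The final assertion above renders kubectl get nodes through a go-template that prints each node's Ready condition. A self-contained sketch of how that template evaluates, using Go's text/template over a hand-built stand-in for the decoded node list (the data literal is an assumption for illustration):

package main

import (
	"os"
	"text/template"
)

func main() {
	// The exact template string used by the test above.
	const src = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Stand-in for the JSON-decoded output of `kubectl get nodes`;
	// lowercase keys mirror the Kubernetes API field names.
	node := map[string]any{"status": map[string]any{"conditions": []any{
		map[string]any{"type": "MemoryPressure", "status": "False"},
		map[string]any{"type": "Ready", "status": "True"},
	}}}
	data := map[string]any{"items": []any{node, node}}

	tmpl := template.Must(template.New("ready").Parse(src))
	_ = tmpl.Execute(os.Stdout, data) // prints " True" once per node
}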

TestMultiNode/serial/StopMultiNode (162.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 stop
E1006 14:37:03.590776  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-962847 stop: (2m41.872191203s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-962847 status: exit status 7 (96.925321ms)

-- stdout --
	multinode-962847
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-962847-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-962847 status --alsologtostderr: exit status 7 (97.650839ms)

-- stdout --
	multinode-962847
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-962847-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1006 14:37:30.717657  771763 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:37:30.717938  771763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:37:30.717952  771763 out.go:374] Setting ErrFile to fd 2...
	I1006 14:37:30.717959  771763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:37:30.718148  771763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:37:30.718334  771763 out.go:368] Setting JSON to false
	I1006 14:37:30.718366  771763 mustload.go:65] Loading cluster: multinode-962847
	I1006 14:37:30.718506  771763 notify.go:220] Checking for updates...
	I1006 14:37:30.718868  771763 config.go:182] Loaded profile config "multinode-962847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:37:30.718899  771763 status.go:174] checking status of multinode-962847 ...
	I1006 14:37:30.719500  771763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:37:30.719541  771763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:37:30.739979  771763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I1006 14:37:30.740644  771763 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:37:30.741326  771763 main.go:141] libmachine: Using API Version  1
	I1006 14:37:30.741354  771763 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:37:30.741782  771763 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:37:30.742021  771763 main.go:141] libmachine: (multinode-962847) Calling .GetState
	I1006 14:37:30.743909  771763 status.go:371] multinode-962847 host status = "Stopped" (err=<nil>)
	I1006 14:37:30.743932  771763 status.go:384] host is not running, skipping remaining checks
	I1006 14:37:30.743949  771763 status.go:176] multinode-962847 status: &{Name:multinode-962847 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 14:37:30.743991  771763 status.go:174] checking status of multinode-962847-m02 ...
	I1006 14:37:30.744445  771763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1006 14:37:30.744494  771763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 14:37:30.758523  771763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40587
	I1006 14:37:30.759154  771763 main.go:141] libmachine: () Calling .GetVersion
	I1006 14:37:30.759707  771763 main.go:141] libmachine: Using API Version  1
	I1006 14:37:30.759734  771763 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 14:37:30.760103  771763 main.go:141] libmachine: () Calling .GetMachineName
	I1006 14:37:30.760296  771763 main.go:141] libmachine: (multinode-962847-m02) Calling .GetState
	I1006 14:37:30.762323  771763 status.go:371] multinode-962847-m02 host status = "Stopped" (err=<nil>)
	I1006 14:37:30.762339  771763 status.go:384] host is not running, skipping remaining checks
	I1006 14:37:30.762345  771763 status.go:176] multinode-962847-m02 status: &{Name:multinode-962847-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (162.07s)

TestMultiNode/serial/RestartMultiNode (86.79s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-962847 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1006 14:37:48.923910  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-962847 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.109064029s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962847 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.79s)

TestMultiNode/serial/ValidateNameConflict (40.25s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-962847
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-962847-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-962847-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (73.61478ms)

-- stdout --
	* [multinode-962847-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-962847-m02' is duplicated with machine name 'multinode-962847-m02' in profile 'multinode-962847'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-962847-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-962847-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.168981286s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-962847
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-962847: exit status 80 (237.187332ms)

-- stdout --
	* Adding node m03 to cluster multinode-962847 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-962847-m03 already exists in multinode-962847-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-962847-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.25s)
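
Both rejections above enforce the same rule: a new profile name must not collide with a machine name inside an existing multi-node profile. A sketch of that check (the data shapes and helper are illustrative, not minikube's implementation):

package main

import "fmt"

// validateProfileName rejects a name that matches any machine of an existing profile.
func validateProfileName(name string, existing map[string][]string) error {
	for profile, machines := range existing {
		for _, m := range machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-962847": {"multinode-962847", "multinode-962847-m02"},
	}
	fmt.Println(validateProfileName("multinode-962847-m02", existing)) // rejected, as in the log
	fmt.Println(validateProfileName("multinode-962847-m04", existing)) // would be accepted
}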

TestScheduledStopUnix (110.54s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-071538 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1006 14:42:03.590642  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-071538 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.732436198s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-071538 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-071538 -n scheduled-stop-071538
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-071538 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1006 14:42:35.507929  743851 retry.go:31] will retry after 119.317µs: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.509096  743851 retry.go:31] will retry after 105.345µs: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.510196  743851 retry.go:31] will retry after 155.382µs: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.511360  743851 retry.go:31] will retry after 283.573µs: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.512482  743851 retry.go:31] will retry after 380.204µs: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.513628  743851 retry.go:31] will retry after 939.574µs: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.514755  743851 retry.go:31] will retry after 1.149831ms: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.516960  743851 retry.go:31] will retry after 1.416056ms: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.519234  743851 retry.go:31] will retry after 1.390869ms: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.521453  743851 retry.go:31] will retry after 2.07239ms: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.523622  743851 retry.go:31] will retry after 7.555806ms: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.531844  743851 retry.go:31] will retry after 12.67944ms: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.545116  743851 retry.go:31] will retry after 18.840178ms: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.564446  743851 retry.go:31] will retry after 14.29446ms: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
I1006 14:42:35.579726  743851 retry.go:31] will retry after 24.393548ms: open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/scheduled-stop-071538/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-071538 --cancel-scheduled
E1006 14:42:48.926921  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-071538 -n scheduled-stop-071538
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-071538
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-071538 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-071538
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-071538: exit status 7 (71.506557ms)

-- stdout --
	scheduled-stop-071538
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-071538 -n scheduled-stop-071538
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-071538 -n scheduled-stop-071538: exit status 7 (69.29006ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-071538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-071538
--- PASS: TestScheduledStopUnix (110.54s)
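
The retry.go lines above show a polling loop waiting for the scheduled-stop pid file, with delays growing roughly geometrically from microseconds upward. A sketch of the same wait-with-backoff pattern (this is not minikube's retry.go; the path is hypothetical):

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls for path, growing the sleep by a jittered factor each
// attempt, echoing the "will retry after ..." output above.
func waitForFile(path string, attempts int) error {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else {
			fmt.Printf("will retry after %v: %v\n", delay, err)
		}
		time.Sleep(delay)
		delay = time.Duration(float64(delay) * (1.5 + rand.Float64())) // grow with jitter
	}
	return fmt.Errorf("%s never appeared after %d attempts", path, attempts)
}

func main() {
	_ = waitForFile("/tmp/scheduled-stop-example.pid", 15) // hypothetical pid-file path
}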

TestRunningBinaryUpgrade (154.68s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.144413565 start -p running-upgrade-455354 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.144413565 start -p running-upgrade-455354 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m42.409137791s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-455354 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-455354 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (50.840641521s)
helpers_test.go:175: Cleaning up "running-upgrade-455354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-455354
--- PASS: TestRunningBinaryUpgrade (154.68s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-419392 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-419392 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (81.564148ms)

-- stdout --
	* [NoKubernetes-419392] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
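
The usage error above comes from a mutual-exclusion check: --no-kubernetes cannot be combined with --kubernetes-version, and the MK_USAGE failure surfaces as exit status 14. A sketch of that kind of flag validation (flag names mirror the log; the body is an assumption, not minikube's code):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // usage errors exit 14, as seen above
	}
}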

TestNoKubernetes/serial/StartWithK8s (84.29s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-419392 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-419392 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.973675879s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-419392 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (84.29s)

TestNetworkPlugins/group/false (4.09s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-702246 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-702246 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (138.530547ms)

-- stdout --
	* [false-702246] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1006 14:44:54.756081  776694 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:54.756433  776694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:54.756445  776694 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:54.756452  776694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:54.756774  776694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-739942/.minikube/bin
	I1006 14:44:54.757429  776694 out.go:368] Setting JSON to false
	I1006 14:44:54.758755  776694 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16046,"bootTime":1759745849,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:54.758836  776694 start.go:140] virtualization: kvm guest
	I1006 14:44:54.760969  776694 out.go:179] * [false-702246] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:54.762509  776694 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:54.762503  776694 notify.go:220] Checking for updates...
	I1006 14:44:54.765495  776694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:54.766905  776694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-739942/kubeconfig
	I1006 14:44:54.768472  776694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-739942/.minikube
	I1006 14:44:54.769803  776694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:54.771243  776694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:54.773558  776694 config.go:182] Loaded profile config "NoKubernetes-419392": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:54.773727  776694 config.go:182] Loaded profile config "force-systemd-flag-640885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:54.773843  776694 config.go:182] Loaded profile config "running-upgrade-455354": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1006 14:44:54.773947  776694 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:54.826436  776694 out.go:179] * Using the kvm2 driver based on user configuration
	I1006 14:44:54.827843  776694 start.go:304] selected driver: kvm2
	I1006 14:44:54.827865  776694 start.go:924] validating driver "kvm2" against <nil>
	I1006 14:44:54.827879  776694 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:54.830301  776694 out.go:203] 
	W1006 14:44:54.831755  776694 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1006 14:44:54.833125  776694 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-702246 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-702246

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-702246

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-702246

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-702246

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-702246

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-702246

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-702246

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-702246

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-702246

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-702246

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-702246

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-702246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-702246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-702246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-702246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-702246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-702246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-702246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-702246" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> k8s: describe kube-proxy daemon set:
error: context "false-702246" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-702246" does not exist

>>> k8s: kube-proxy logs:
error: context "false-702246" does not exist

>>> host: kubelet daemon status:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: kubelet daemon config:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> k8s: kubelet logs:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-702246

>>> host: docker daemon status:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: docker daemon config:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: /etc/docker/daemon.json:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: docker system info:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: cri-docker daemon status:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: cri-docker daemon config:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: cri-dockerd version:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: containerd daemon status:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: containerd daemon config:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: /etc/containerd/config.toml:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: containerd config dump:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: crio daemon status:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: crio daemon config:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: /etc/crio:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

>>> host: crio config:
* Profile "false-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-702246"

----------------------- debugLogs end: false-702246 [took: 3.769771512s] --------------------------------
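Every probe above reports "Profile not found" or "context does not exist" because the false variant of TestNetworkPlugins appears to short-circuit before a false-702246 cluster is ever created (the whole group finishes in about 4s below), so the post-mortem debug-log sweep has nothing to inspect; these errors are expected and do not affect the verdict. A quick manual check, assuming the same workspace layout as this job:

	$ out/minikube-linux-amd64 profile list          # false-702246 should not be listed
	$ kubectl config get-contexts false-702246       # should report the context does not exist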
helpers_test.go:175: Cleaning up "false-702246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-702246
--- PASS: TestNetworkPlugins/group/false (4.09s)

TestNoKubernetes/serial/StartWithStopK8s (49.87s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-419392 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-419392 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (48.853387046s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-419392 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-419392 status -o json: exit status 2 (243.891738ms)

-- stdout --
	{"Name":"NoKubernetes-419392","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
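The non-zero exit from "minikube status" is expected for this profile: it was started with --no-kubernetes, so the host is Running while the kubelet and API server are deliberately Stopped, and minikube reflects that partial state in its exit code. A single field can be pulled out of the JSON status like this (illustrative sketch; assumes jq is installed on the runner):

	$ out/minikube-linux-amd64 -p NoKubernetes-419392 status -o json | jq -r '.Kubelet'
	Stopped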
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-419392
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (49.87s)

TestNoKubernetes/serial/Start (42.95s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-419392 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-419392 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.951783166s)
--- PASS: TestNoKubernetes/serial/Start (42.95s)

TestPause/serial/Start (78.41s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-670840 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-670840 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.409140256s)
--- PASS: TestPause/serial/Start (78.41s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-419392 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-419392 "sudo systemctl is-active --quiet service kubelet": exit status 1 (239.017174ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
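"systemctl is-active" exits 0 only when the queried unit is active, and --quiet suppresses the state name on stdout, so the non-zero ssh exit here is exactly what the assertion wants: it proves no kubelet service is running inside the guest. The same check can be run by hand (illustrative):

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-419392 "sudo systemctl is-active kubelet"
	# prints the unit state and exits non-zero for anything other than "active"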
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestStoppedBinaryUpgrade/Setup (0.61s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.61s)

TestStoppedBinaryUpgrade/Upgrade (81.91s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3498745166 start -p stopped-upgrade-216364 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3498745166 start -p stopped-upgrade-216364 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.569093434s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3498745166 -p stopped-upgrade-216364 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3498745166 -p stopped-upgrade-216364 stop: (1.644733251s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-216364 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-216364 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.691538888s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (81.91s)
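This exercises the stopped-binary upgrade path end to end: a released v1.32.0 binary provisions the cluster and stops it, then the freshly built binary must adopt the existing profile and bring it back up without reprovisioning.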

                                                
                                    
TestNetworkPlugins/group/auto/Start (62.1s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.098338585s)
--- PASS: TestNetworkPlugins/group/auto/Start (62.10s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-216364
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-216364: (1.257876961s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

TestNetworkPlugins/group/kindnet/Start (63.47s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m3.470832983s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.47s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-702246 "pgrep -a kubelet"
I1006 14:49:51.400435  743851 config.go:182] Loaded profile config "auto-702246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (10.32s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-702246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4r7wz" [b7938640-93e7-4ec7-aade-1a7f88d39317] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4r7wz" [b7938640-93e7-4ec7-aade-1a7f88d39317] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005142159s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)
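Each NetCatPod step applies testdata/netcat-deployment.yaml and then polls until a pod labelled app=netcat reports Running. Outside the harness, roughly the same readiness gate can be expressed with kubectl wait (illustrative, using the same label and context as above):

	$ kubectl --context auto-702246 wait --for=condition=Ready pod -l app=netcat --timeout=15m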

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-702246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)
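The DNS probe resolves the short name kubernetes.default from inside the netcat pod, which only succeeds when cluster DNS is serving and the pod's resolv.conf search path is intact. With the default cluster domain, the fully qualified form should resolve to the same service (illustrative):

	$ kubectl --context auto-702246 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local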

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
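Localhost and HairPin are complementary connectivity checks: the first dials the pod's own localhost:8080, while the second dials the pod's own Service name (netcat), which forces traffic out through the Service VIP and back to the originating pod, a loop that only works when hairpin NAT is configured on the node. The harness treats the plugin as healthy when the nc probe exits 0, as in the command above.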

                                                
                                    
TestNetworkPlugins/group/calico/Start (81.4s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.395615657s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.40s)

TestNetworkPlugins/group/custom-flannel/Start (82.82s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.822668779s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (82.82s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.05s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-hfg56" [0ca998ad-df6e-4f2c-bc29-3889033c8641] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.046211764s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.05s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-702246 "pgrep -a kubelet"
I1006 14:50:57.345473  743851 config.go:182] Loaded profile config "kindnet-702246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.17s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-702246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-702246 replace --force -f testdata/netcat-deployment.yaml: (1.097928797s)
I1006 14:50:58.487569  743851 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1006 14:50:58.495381  743851 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qnz6q" [a6063194-c347-4468-8611-7eac0791902c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qnz6q" [a6063194-c347-4468-8611-7eac0791902c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006564017s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.17s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-702246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-nq7qt" [8d13e065-573c-45af-8573-0aee4fd919e8] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-nq7qt" [8d13e065-573c-45af-8573-0aee4fd919e8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007403331s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (62.53s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.528422633s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.53s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-702246 "pgrep -a kubelet"
I1006 14:51:32.331063  743851 config.go:182] Loaded profile config "calico-702246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (13.32s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-702246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nrszp" [8712d7ec-cf5d-49ec-bfcc-b3d2397343be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nrszp" [8712d7ec-cf5d-49ec-bfcc-b3d2397343be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005521836s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.32s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-702246 "pgrep -a kubelet"
I1006 14:51:42.692693  743851 config.go:182] Loaded profile config "custom-flannel-702246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-702246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kpnnw" [710e9832-f2c5-493d-be10-69dff5434bde] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kpnnw" [710e9832-f2c5-493d-be10-69dff5434bde] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.006835154s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

TestNetworkPlugins/group/calico/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-702246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-702246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (75.44s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m15.435845963s)
--- PASS: TestNetworkPlugins/group/flannel/Start (75.44s)

TestNetworkPlugins/group/bridge/Start (78.91s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-702246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.914857936s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.91s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-702246 "pgrep -a kubelet"
I1006 14:52:29.800187  743851 config.go:182] Loaded profile config "enable-default-cni-702246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-702246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xz4kf" [386b275d-763b-4e1f-91ac-1ff2cc92d890] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xz4kf" [386b275d-763b-4e1f-91ac-1ff2cc92d890] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.006089385s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-702246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (60.76s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-311855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-311855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m0.76272546s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.76s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-dxdbl" [8752a51c-9e57-43c0-8f93-6f869cdbad74] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005915803s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-702246 "pgrep -a kubelet"
I1006 14:53:26.624388  743851 config.go:182] Loaded profile config "flannel-702246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (13.28s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-702246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qb9vs" [e84658c8-8f08-46ec-8499-bdc49237272c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qb9vs" [e84658c8-8f08-46ec-8499-bdc49237272c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.005436843s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.28s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.52s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-702246 "pgrep -a kubelet"
I1006 14:53:31.467753  743851 config.go:182] Loaded profile config "bridge-702246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.52s)

TestNetworkPlugins/group/bridge/NetCatPod (10.52s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-702246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zn6dd" [01eafeb0-59bd-4746-8693-f1740dd70fbc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zn6dd" [01eafeb0-59bd-4746-8693-f1740dd70fbc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004777318s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.52s)

TestNetworkPlugins/group/flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-702246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-702246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-702246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E1006 14:58:22.930399  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:25.491978  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:30.613780  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:31.969809  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:31.976282  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:31.987823  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:32.009503  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:32.050961  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:32.132607  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:32.294233  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:32.616061  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:33.258400  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:34.539950  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:34.802418  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:37.101654  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:40.855390  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:42.223720  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:52.035763  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:52.465678  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:01.336788  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:01.876840  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:01.883341  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:01.894781  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:01.916180  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:01.957715  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:02.039546  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:02.201883  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:02.523763  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:03.165896  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:04.447229  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
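The burst of cert_rotation errors above is emitted by the test binary's client-certificate reload watcher, which still holds references to kubeconfig entries for profiles torn down by earlier cleanup steps; each reload attempt then fails because the profile's client.crt has already been deleted. These messages are leftover noise from dismantled clusters, not failures in the tests that follow.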

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (76.63s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-764807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-764807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m16.625527426s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.63s)
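With --preload=false, minikube skips the preloaded image tarball and pulls every Kubernetes image individually, so this FirstStart exercises the slower image-pull path that the no-preload group is designed to cover.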

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (77.94s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-203704 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-203704 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m17.944224325s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.94s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-311855 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8310c002-4523-42f1-acef-863b108478fa] Pending
helpers_test.go:352: "busybox" [8310c002-4523-42f1-acef-863b108478fa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8310c002-4523-42f1-acef-863b108478fa] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005267908s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-311855 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-311855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-311855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.142171326s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-311855 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/old-k8s-version/serial/Stop (87.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-311855 --alsologtostderr -v=3
E1006 14:54:51.705713  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:54:51.712396  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:54:51.723908  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:54:51.745427  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:54:51.787017  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:54:51.868559  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:54:52.030737  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:54:52.352283  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:54:52.994130  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:54:54.275864  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:54:56.837316  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:01.959545  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:12.201658  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-311855 --alsologtostderr -v=3: (1m27.109585688s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (87.11s)

TestStartStop/group/no-preload/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-764807 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [77e8d3ff-c869-45ae-98fb-f97fde63a993] Pending
helpers_test.go:352: "busybox" [77e8d3ff-c869-45ae-98fb-f97fde63a993] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [77e8d3ff-c869-45ae-98fb-f97fde63a993] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00576509s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-764807 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-203704 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4ae67c2d-d9ad-4f70-be0a-1a8e8578d06a] Pending
helpers_test.go:352: "busybox" [4ae67c2d-d9ad-4f70-be0a-1a8e8578d06a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4ae67c2d-d9ad-4f70-be0a-1a8e8578d06a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004837136s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-203704 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-764807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-764807 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/no-preload/serial/Stop (80.45s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-764807 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-764807 --alsologtostderr -v=3: (1m20.447221018s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (80.45s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-203704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-203704 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/Stop (84.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-203704 --alsologtostderr -v=3
E1006 14:55:32.683126  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-203704 --alsologtostderr -v=3: (1m24.816174457s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (84.82s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-311855 -n old-k8s-version-311855
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-311855 -n old-k8s-version-311855: exit status 7 (77.045408ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-311855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (46.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-311855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E1006 14:55:50.941928  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:50.948429  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:50.959896  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:50.981405  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:51.022908  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:51.104457  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:51.266117  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:51.587926  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:52.229795  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:53.511672  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:55:56.073016  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:01.195397  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:11.436747  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:13.645463  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-311855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (46.222301829s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-311855 -n old-k8s-version-311855
E1006 14:56:26.086840  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:26.093322  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:26.104772  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:26.126295  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.57s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1006 14:56:26.168142  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7xjqd" [f3e062b7-d0d7-494a-a9f6-8eccdccb655d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1006 14:56:26.251579  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:26.413212  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:26.735064  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:27.377331  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7xjqd" [f3e062b7-d0d7-494a-a9f6-8eccdccb655d] Running
E1006 14:56:28.659446  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:31.221540  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:31.919036  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.0044186s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7xjqd" [f3e062b7-d0d7-494a-a9f6-8eccdccb655d] Running
E1006 14:56:36.343469  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004249439s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-311855 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-311855 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-311855 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-311855 -n old-k8s-version-311855
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-311855 -n old-k8s-version-311855: exit status 2 (259.234158ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-311855 -n old-k8s-version-311855
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-311855 -n old-k8s-version-311855: exit status 2 (248.049383ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-311855 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-311855 -n old-k8s-version-311855
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-311855 -n old-k8s-version-311855
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.88s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-915964 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1006 14:56:44.204290  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/custom-flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:45.487064  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/custom-flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:56:46.585557  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-915964 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m2.356014926s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.36s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-764807 -n no-preload-764807
E1006 14:56:46.658800  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-764807 -n no-preload-764807: exit status 7 (88.472332ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-764807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (78.86s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-764807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1006 14:56:48.049387  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/custom-flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-764807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m18.445514817s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-764807 -n no-preload-764807
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (78.86s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-203704 -n embed-certs-203704
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-203704 -n embed-certs-203704: exit status 7 (77.858998ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-203704 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (73.3s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-203704 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1006 14:56:53.171138  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/custom-flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:03.413190  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/custom-flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:03.586575  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:07.067734  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:12.881091  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:23.895625  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/custom-flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:30.096160  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:30.102709  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:30.114184  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:30.135628  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:30.177150  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:30.258694  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:30.420383  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:30.741823  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:31.383855  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:32.665305  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:35.227649  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:35.567566  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:40.349878  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-203704 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m12.893665077s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-203704 -n embed-certs-203704
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (73.30s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-915964 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a4cba9c2-607a-4731-9f90-097c45eb0f82] Pending
helpers_test.go:352: "busybox" [a4cba9c2-607a-4731-9f90-097c45eb0f82] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1006 14:57:48.029210  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:48.923266  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/addons-395535/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:57:50.591479  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [a4cba9c2-607a-4731-9f90-097c45eb0f82] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00691696s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-915964 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.57s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-915964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-915964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.176967723s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-915964 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (81.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-915964 --alsologtostderr -v=3
E1006 14:58:04.857770  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/custom-flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-915964 --alsologtostderr -v=3: (1m21.394061571s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (81.39s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nnp4t" [68ccc05a-4fab-4ca9-9efe-21c3f62374a5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004442821s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2mddv" [74102f4c-9424-4086-be5a-4eae8b661052] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2mddv" [74102f4c-9424-4086-be5a-4eae8b661052] Running
E1006 14:58:11.073724  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.010583003s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nnp4t" [68ccc05a-4fab-4ca9-9efe-21c3f62374a5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004430694s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-764807 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2mddv" [74102f4c-9424-4086-be5a-4eae8b661052] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004449138s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-203704 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-764807 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-764807 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-764807 -n no-preload-764807
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-764807 -n no-preload-764807: exit status 2 (287.49004ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-764807 -n no-preload-764807
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-764807 -n no-preload-764807: exit status 2 (289.37097ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-764807 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-764807 -n no-preload-764807
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-764807 -n no-preload-764807
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-203704 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-203704 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-203704 -n embed-certs-203704
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-203704 -n embed-certs-203704: exit status 2 (305.29832ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-203704 -n embed-certs-203704
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-203704 -n embed-certs-203704: exit status 2 (298.955816ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-203704 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-203704 -n embed-certs-203704
E1006 14:58:21.007574  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-203704 -n embed-certs-203704
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

TestStartStop/group/newest-cni/serial/FirstStart (43.24s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-320304 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-320304 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (43.238772765s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.24s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-320304 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-320304 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.092654672s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/newest-cni/serial/Stop (86.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-320304 --alsologtostderr -v=3
E1006 14:59:07.008601  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:09.951509  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:12.130663  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:12.947601  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-320304 --alsologtostderr -v=3: (1m26.43716286s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (86.44s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-915964 -n default-k8s-diff-port-915964
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-915964 -n default-k8s-diff-port-915964: exit status 7 (72.053764ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-915964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-915964 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1006 14:59:22.371963  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:26.781853  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/custom-flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:42.298213  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:42.853870  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:51.705219  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:59:53.909346  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-915964 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (45.798541036s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-915964 -n default-k8s-diff-port-915964
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.08s)
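
The restart above must keep the non-default API server port requested with --apiserver-port=8444. One hedged way to confirm that from the kubeconfig (plain kubectl, not part of the test itself):

	kubectl config view --minify --context default-k8s-diff-port-915964 \
	  -o jsonpath='{.clusters[0].cluster.server}'
	# the server URL is expected to end in :8444 for this profile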

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-krjgq" [5f6ee33b-dc9f-4408-a5ea-d1c46050d68d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003750783s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
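
The 9m0s wait above polls for pods carrying the dashboard's k8s-app label. An assumed one-shot equivalent of that readiness check (not the test's own code path):

	kubectl --context default-k8s-diff-port-915964 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m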

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-krjgq" [5f6ee33b-dc9f-4408-a5ea-d1c46050d68d] Running
E1006 15:00:13.957251  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/enable-default-cni-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:00:15.003404  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:00:15.009902  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:00:15.021342  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:00:15.042804  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:00:15.084335  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:00:15.165906  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:00:15.327497  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:00:15.649672  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:00:16.291092  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005616518s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-915964 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-915964 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-915964 --alsologtostderr -v=1
E1006 15:00:17.572912  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-915964 -n default-k8s-diff-port-915964
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-915964 -n default-k8s-diff-port-915964: exit status 2 (270.35245ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-915964 -n default-k8s-diff-port-915964
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-915964 -n default-k8s-diff-port-915964: exit status 2 (264.436483ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-915964 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-915964 -n default-k8s-diff-port-915964
E1006 15:00:19.409599  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/auto-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-915964 -n default-k8s-diff-port-915964
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)
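
To summarize the round-trip above: while paused, {{.APIServer}} reports "Paused" and {{.Kubelet}} reports "Stopped", and both status calls exit 2, which the test explicitly tolerates; after unpause both calls succeed again. The same steps by hand:

	out/minikube-linux-amd64 pause -p default-k8s-diff-port-915964
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-915964 || true   # "Paused", exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-915964 || true     # "Stopped", exit 2
	out/minikube-linux-amd64 unpause -p default-k8s-diff-port-915964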

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-320304 -n newest-cni-320304
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-320304 -n newest-cni-320304: exit status 7 (77.470733ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-320304 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (33.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-320304 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1006 15:00:35.498840  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:00:50.941718  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:00:55.980498  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:01:04.220605  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-320304 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (33.545053212s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-320304 -n newest-cni-320304
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.91s)
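
The start above hands a custom pod CIDR to kubeadm via --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 and waits only for the components named in --wait. A hedged spot check that the CIDR landed in the cluster configuration (kubeadm keeps its ClusterConfiguration in the kubeadm-config ConfigMap):

	kubectl --context newest-cni-320304 -n kube-system get configmap kubeadm-config \
	  -o yaml | grep -i podSubnet
	# expected: podSubnet: 10.42.0.0/16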

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
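
The warning above is expected for this profile: --network-plugin=cni without a bundled CNI means no pods can schedule until a plugin is applied, so both app checks are skipped rather than failed. Purely as an illustration (the manifest URL, and the need to edit its net-conf from flannel's default 10.244.0.0/16 to this cluster's 10.42.0.0/16, are assumptions, not part of this run), a CNI could be applied like so:

	kubectl --context newest-cni-320304 apply \
	  -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml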

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-320304 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (3.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-320304 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-320304 -n newest-cni-320304
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-320304 -n newest-cni-320304: exit status 2 (333.724141ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-320304 -n newest-cni-320304
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-320304 -n newest-cni-320304: exit status 2 (314.301926ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-320304 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-320304 --alsologtostderr -v=1: (1.022694064s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-320304 -n newest-cni-320304
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-320304 -n newest-cni-320304
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.32s)
E1006 15:01:15.831114  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/bridge-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:01:18.645843  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/kindnet-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:01:26.086912  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:01:36.942047  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/no-preload-764807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:01:42.915651  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/custom-flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:01:45.737813  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/old-k8s-version-311855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:01:53.792824  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/calico-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:02:03.586918  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/functional-561811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:02:10.623208  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/custom-flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
Test skip (40/321)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.34
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 5.04
267 TestNetworkPlugins/group/cilium 4.09
279 TestStartStop/group/disable-driver-mounts 0.16

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-395535 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (5.04s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-702246 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-702246

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-702246

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-702246

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-702246

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-702246

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-702246

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-702246

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-702246

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-702246

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-702246

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: /etc/hosts:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: /etc/resolv.conf:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-702246

>>> host: crictl pods:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: crictl containers:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> k8s: describe netcat deployment:
error: context "kubenet-702246" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-702246" does not exist

>>> k8s: netcat logs:
error: context "kubenet-702246" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-702246" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-702246" does not exist

>>> k8s: coredns logs:
error: context "kubenet-702246" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-702246" does not exist

>>> k8s: api server logs:
error: context "kubenet-702246" does not exist

>>> host: /etc/cni:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: ip a s:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: ip r s:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: iptables-save:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: iptables table nat:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-702246" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-702246" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-702246" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: kubelet daemon config:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> k8s: kubelet logs:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-702246

>>> host: docker daemon status:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: docker daemon config:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: docker system info:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: cri-docker daemon status:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: cri-docker daemon config:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: cri-dockerd version:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: containerd daemon status:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: containerd daemon config:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: containerd config dump:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: crio daemon status:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: crio daemon config:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: /etc/crio:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

>>> host: crio config:
* Profile "kubenet-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-702246"

----------------------- debugLogs end: kubenet-702246 [took: 4.869414537s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-702246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-702246
--- SKIP: TestNetworkPlugins/group/kubenet (5.04s)
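Note on the dump above: every probe failed identically because the "kubenet-702246" profile was never created (the test was skipped before "minikube start" ran), yet the debugLogs collector still executed its full battery of host and kubectl probes. A minimal sketch of how such collection could be guarded, assuming a hypothetical helper profileExists that is not part of the minikube test suite ("minikube profile list" itself is a real command; the substring match is a simplification for illustration):

// guard_sketch.go - a minimal sketch, not code from the minikube repo.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// profileExists scans "minikube profile list" output for the given name.
// A substring match is good enough for this illustration.
func profileExists(profile string) bool {
    out, err := exec.Command("minikube", "profile", "list").CombinedOutput()
    if err != nil {
        return false
    }
    return strings.Contains(string(out), profile)
}

func main() {
    profile := "kubenet-702246"
    if !profileExists(profile) {
        fmt.Printf("profile %q not found; skipping debug-log collection\n", profile)
        return // one line instead of dozens of identical probe failures
    }
    // ... run the host/netcat/k8s probes only when the profile exists ...
}

With a guard like this, a skipped network-plugin test would emit a single line instead of dozens of identical "Profile not found" entries.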
x
+
TestNetworkPlugins/group/cilium (4.09s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-702246 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-702246

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-702246

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-702246

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-702246

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-702246

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-702246

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-702246

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-702246

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-702246

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-702246

>>> host: /etc/nsswitch.conf:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: /etc/hosts:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: /etc/resolv.conf:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-702246

>>> host: crictl pods:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: crictl containers:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> k8s: describe netcat deployment:
error: context "cilium-702246" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-702246" does not exist

>>> k8s: netcat logs:
error: context "cilium-702246" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-702246" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-702246" does not exist

>>> k8s: coredns logs:
error: context "cilium-702246" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-702246" does not exist

>>> k8s: api server logs:
error: context "cilium-702246" does not exist

>>> host: /etc/cni:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: ip a s:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: ip r s:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: iptables-save:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: iptables table nat:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-702246

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-702246

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-702246" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-702246" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-702246

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-702246

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-702246" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-702246" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-702246" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-702246" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-702246" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: kubelet daemon config:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> k8s: kubelet logs:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-702246

>>> host: docker daemon status:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: docker daemon config:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: docker system info:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: cri-docker daemon status:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: cri-docker daemon config:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: cri-dockerd version:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: containerd daemon status:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: containerd daemon config:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: containerd config dump:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: crio daemon status:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: crio daemon config:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: /etc/crio:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

>>> host: crio config:
* Profile "cilium-702246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702246"

----------------------- debugLogs end: cilium-702246 [took: 3.922140531s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-702246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-702246
--- SKIP: TestNetworkPlugins/group/cilium (4.09s)
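The cilium dump shows the same failure from the kubectl side: the kubeconfig printed under ">>> k8s: kubectl config:" has clusters, contexts, and users all null, so every kubectl call naming the cilium-702246 context fails before any API server is contacted. A pre-flight check is sketched below; the helper contextExists is hypothetical, while "kubectl config get-contexts -o name" is a real kubectl command that prints one context name per line:

// context_check.go - illustration only; not code from the test suite.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// contextExists lists kubeconfig context names and looks for an exact match.
func contextExists(name string) (bool, error) {
    out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    if err != nil {
        return false, err
    }
    for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        if ctx == name {
            return true, nil
        }
    }
    return false, nil
}

func main() {
    ok, err := contextExists("cilium-702246")
    fmt.Println(ok, err)
}

Against the empty kubeconfig above, contextExists("cilium-702246") should return false with no error, which matches the "context was not found for specified context" messages throughout the dump.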
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-602250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-602250
E1006 14:56:43.562605  743851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-739942/.minikube/profiles/custom-flannel-702246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)